Test Report: Docker_Linux_crio 21918

08454a179ffa60c8ae500105aac58654b5cdef58:2025-11-19:42399

Failed tests (38/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.24
35 TestAddons/parallel/Registry 13.74
36 TestAddons/parallel/RegistryCreds 0.4
37 TestAddons/parallel/Ingress 145.84
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 5.31
41 TestAddons/parallel/CSI 44.23
42 TestAddons/parallel/Headlamp 2.52
43 TestAddons/parallel/CloudSpanner 5.24
44 TestAddons/parallel/LocalPath 10.1
45 TestAddons/parallel/NvidiaDevicePlugin 6.25
46 TestAddons/parallel/Yakd 5.24
47 TestAddons/parallel/AmdGpuDevicePlugin 6.26
97 TestFunctional/parallel/ServiceCmdConnect 602.69
119 TestFunctional/parallel/ImageCommands/ImageListShort 2.27
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.95
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.45
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.28
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.18
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.32
137 TestFunctional/parallel/ServiceCmd/DeployApp 600.54
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
153 TestFunctional/parallel/ServiceCmd/Format 0.51
154 TestFunctional/parallel/ServiceCmd/URL 0.51
191 TestJSONOutput/pause/Command 2.24
197 TestJSONOutput/unpause/Command 1.58
263 TestPause/serial/Pause 7.76
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.02
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.88
313 TestStartStop/group/old-k8s-version/serial/Pause 5.63
320 TestStartStop/group/no-preload/serial/Pause 6.14
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.1
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.95
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.13
337 TestStartStop/group/newest-cni/serial/Pause 5.05
345 TestStartStop/group/embed-certs/serial/Pause 5.78
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.51
TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable volcano --alsologtostderr -v=1: exit status 11 (243.171642ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:49:12.430500   22252 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:49:12.430803   22252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:12.430826   22252 out.go:374] Setting ErrFile to fd 2...
	I1119 21:49:12.430833   22252 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:12.431010   22252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:49:12.431238   22252 mustload.go:66] Loading cluster: addons-418049
	I1119 21:49:12.431579   22252 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:12.431596   22252 addons.go:607] checking whether the cluster is paused
	I1119 21:49:12.431678   22252 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:12.431690   22252 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:49:12.432032   22252 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:49:12.450445   22252 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:12.450480   22252 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:49:12.467982   22252 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:49:12.558009   22252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:12.558124   22252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:12.588162   22252 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:49:12.588191   22252 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:49:12.588195   22252 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:49:12.588198   22252 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:49:12.588200   22252 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:49:12.588203   22252 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:49:12.588206   22252 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:49:12.588209   22252 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:49:12.588211   22252 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:49:12.588218   22252 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:49:12.588221   22252 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:49:12.588223   22252 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:49:12.588225   22252 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:49:12.588229   22252 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:49:12.588231   22252 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:49:12.588238   22252 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:49:12.588240   22252 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:49:12.588245   22252 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:49:12.588247   22252 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:49:12.588249   22252 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:49:12.588252   22252 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:49:12.588254   22252 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:49:12.588256   22252 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:49:12.588259   22252 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:49:12.588261   22252 cri.go:89] found id: ""
	I1119 21:49:12.588303   22252 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:49:12.603916   22252 out.go:203] 
	W1119 21:49:12.604936   22252 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:49:12.604953   22252 out.go:285] * 
	* 
	W1119 21:49:12.608193   22252 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:49:12.609249   22252 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
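Note on the failure pattern: this and most of the other addon failures in the report share the same signature. "addons disable" exits 11 with MK_ADDON_DISABLE_PAUSED because the paused-state check first lists kube-system containers with crictl (which succeeds, as the "found id:" lines above show) and then runs "sudo runc list -f json", which fails with "open /run/runc: no such file or directory" on this CRI-O node. Below is a minimal Go sketch of that two-step sequence, run locally rather than over SSH; the helper names are hypothetical and it assumes crictl and runc are on PATH. It illustrates the log sequence above, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the `crictl ps -a --quiet --label ...`
// call from the log above; in this run it succeeds and returns 24 IDs.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// listPausedWithRunc mirrors the failing `sudo runc list -f json` step.
// In this run it exits 1 because /run/runc does not exist on the node,
// which minikube surfaces as MK_ADDON_DISABLE_PAUSED.
func listPausedWithRunc() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	if ids, err := listKubeSystemContainers(); err == nil {
		fmt.Printf("crictl found %d kube-system containers\n", len(ids))
	}
	if _, err := listPausedWithRunc(); err != nil {
		fmt.Println("runc list failed (the failure seen in this report):", err)
	}
}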

TestAddons/parallel/Registry (13.74s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 4.511621ms
I1119 21:49:20.375289   12829 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1119 21:49:20.375307   12829 kapi.go:107] duration metric: took 4.626137ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-7pv4f" [c273eed6-5720-40a2-aac8-2149c492c58d] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002834924s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-znvmk" [5769f042-e96d-4796-9313-85e723be54a5] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002981764s
addons_test.go:392: (dbg) Run:  kubectl --context addons-418049 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-418049 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-418049 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.30589389s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 ip
2025/11/19 21:49:33 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable registry --alsologtostderr -v=1: exit status 11 (226.439583ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:49:33.935310   23664 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:49:33.935450   23664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:33.935459   23664 out.go:374] Setting ErrFile to fd 2...
	I1119 21:49:33.935463   23664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:33.935651   23664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:49:33.935956   23664 mustload.go:66] Loading cluster: addons-418049
	I1119 21:49:33.936351   23664 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:33.936368   23664 addons.go:607] checking whether the cluster is paused
	I1119 21:49:33.936452   23664 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:33.936464   23664 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:49:33.937676   23664 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:49:33.954956   23664 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:33.955001   23664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:49:33.970711   23664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:49:34.060757   23664 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:34.060861   23664 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:34.087737   23664 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:49:34.087753   23664 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:49:34.087757   23664 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:49:34.087760   23664 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:49:34.087763   23664 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:49:34.087765   23664 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:49:34.087768   23664 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:49:34.087770   23664 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:49:34.087773   23664 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:49:34.087777   23664 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:49:34.087780   23664 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:49:34.087782   23664 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:49:34.087785   23664 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:49:34.087787   23664 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:49:34.087790   23664 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:49:34.087794   23664 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:49:34.087801   23664 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:49:34.087805   23664 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:49:34.087809   23664 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:49:34.087823   23664 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:49:34.087831   23664 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:49:34.087836   23664 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:49:34.087839   23664 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:49:34.087843   23664 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:49:34.087847   23664 cri.go:89] found id: ""
	I1119 21:49:34.087884   23664 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:49:34.100715   23664 out.go:203] 
	W1119 21:49:34.101695   23664 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:49:34.101711   23664 out.go:285] * 
	* 
	W1119 21:49:34.104641   23664 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:49:34.105628   23664 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.74s)

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.626322ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-418049
addons_test.go:332: (dbg) Run:  kubectl --context addons-418049 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (242.557964ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:49:40.870694   25317 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:49:40.871112   25317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:40.871127   25317 out.go:374] Setting ErrFile to fd 2...
	I1119 21:49:40.871135   25317 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:40.871570   25317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:49:40.872209   25317 mustload.go:66] Loading cluster: addons-418049
	I1119 21:49:40.872535   25317 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:40.872549   25317 addons.go:607] checking whether the cluster is paused
	I1119 21:49:40.872630   25317 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:40.872641   25317 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:49:40.873013   25317 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:49:40.890693   25317 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:40.890738   25317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:49:40.906256   25317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:49:40.996713   25317 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:40.996838   25317 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:41.028668   25317 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:49:41.028691   25317 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:49:41.028694   25317 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:49:41.028697   25317 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:49:41.028700   25317 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:49:41.028704   25317 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:49:41.028706   25317 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:49:41.028708   25317 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:49:41.028710   25317 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:49:41.028717   25317 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:49:41.028722   25317 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:49:41.028725   25317 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:49:41.028729   25317 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:49:41.028733   25317 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:49:41.028737   25317 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:49:41.028751   25317 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:49:41.028759   25317 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:49:41.028763   25317 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:49:41.028765   25317 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:49:41.028768   25317 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:49:41.028773   25317 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:49:41.028778   25317 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:49:41.028780   25317 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:49:41.028783   25317 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:49:41.028785   25317 cri.go:89] found id: ""
	I1119 21:49:41.028860   25317 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:49:41.043724   25317 out.go:203] 
	W1119 21:49:41.045459   25317 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:49:41.045488   25317 out.go:285] * 
	* 
	W1119 21:49:41.049561   25317 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:49:41.050898   25317 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (145.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-418049 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-418049 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-418049 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [5c6f52b1-d380-4344-8a19-0c3dc79badc2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [5c6f52b1-d380-4344-8a19-0c3dc79badc2] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.06865622s
I1119 21:49:47.928980   12829 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.404843744s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-418049 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
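For reference, the check that timed out earlier in this test (addons_test.go:264) is an HTTP request made from inside the node to 127.0.0.1 with the Host header set to nginx.example.com, via `minikube ssh "curl -s ..."`. Below is a minimal Go sketch of the same request; the 30-second timeout is a hypothetical value for illustration, since the report only shows the curl invocation and that the ssh command exited with status 28 after roughly 2m14s.

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 30 * time.Second} // hypothetical timeout

	// The request targets the ingress controller on the node's loopback
	// address; the Host header selects the Ingress rule for nginx.example.com.
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		// In this run the equivalent curl never got a response and timed out.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}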
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-418049
helpers_test.go:243: (dbg) docker inspect addons-418049:

-- stdout --
	[
	    {
	        "Id": "2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56",
	        "Created": "2025-11-19T21:47:32.785501192Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14834,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T21:47:32.812777816Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56/hostname",
	        "HostsPath": "/var/lib/docker/containers/2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56/hosts",
	        "LogPath": "/var/lib/docker/containers/2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56/2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56-json.log",
	        "Name": "/addons-418049",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-418049:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-418049",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56",
	                "LowerDir": "/var/lib/docker/overlay2/9bb5febb853bf51136be44320b0dbb0859e9b690dd21ae57082ee435562fc7f1-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9bb5febb853bf51136be44320b0dbb0859e9b690dd21ae57082ee435562fc7f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9bb5febb853bf51136be44320b0dbb0859e9b690dd21ae57082ee435562fc7f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9bb5febb853bf51136be44320b0dbb0859e9b690dd21ae57082ee435562fc7f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-418049",
	                "Source": "/var/lib/docker/volumes/addons-418049/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-418049",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-418049",
	                "name.minikube.sigs.k8s.io": "addons-418049",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "76ebb1c02aec5768f4c9b0afad928936fcb43c9871d4dfaa07be51420650a2d9",
	            "SandboxKey": "/var/run/docker/netns/76ebb1c02aec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-418049": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3686714f91f96d551d1f231e1e1262ba4f1933bd595b20619b47187081139dc2",
	                    "EndpointID": "4082a882a07fae38d7bf161424b6efe0fcc3608cebc14f63617508651047c376",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ee:fe:9a:5a:68:f0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-418049",
	                        "2587ae0574ec"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-418049 -n addons-418049
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-418049 logs -n 25: (1.049768506s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-562747 --alsologtostderr --binary-mirror http://127.0.0.1:39249 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-562747 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	│ delete  │ -p binary-mirror-562747                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-562747 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ addons  │ disable dashboard -p addons-418049                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	│ addons  │ enable dashboard -p addons-418049                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	│ start   │ -p addons-418049 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:49 UTC │
	│ addons  │ addons-418049 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ ssh     │ addons-418049 ssh cat /opt/local-path-provisioner/pvc-507d12fa-be38-43d5-a275-67581d2b4b4d_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ addons  │ addons-418049 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ ip      │ addons-418049 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ addons  │ addons-418049 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ enable headlamp -p addons-418049 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-418049                                                                                                                                                                                                                                                                                                                                                                                           │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ addons  │ addons-418049 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ ssh     │ addons-418049 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │                     │
	│ addons  │ addons-418049 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:50 UTC │                     │
	│ ip      │ addons-418049 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-418049        │ jenkins │ v1.37.0 │ 19 Nov 25 21:52 UTC │ 19 Nov 25 21:52 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:47:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:47:09.186037   14179 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:47:09.186244   14179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:09.186252   14179 out.go:374] Setting ErrFile to fd 2...
	I1119 21:47:09.186255   14179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:09.186537   14179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:47:09.187653   14179 out.go:368] Setting JSON to false
	I1119 21:47:09.188545   14179 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1777,"bootTime":1763587052,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:47:09.188634   14179 start.go:143] virtualization: kvm guest
	I1119 21:47:09.189989   14179 out.go:179] * [addons-418049] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 21:47:09.191259   14179 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:47:09.191271   14179 notify.go:221] Checking for updates...
	I1119 21:47:09.193263   14179 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:47:09.194515   14179 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 21:47:09.195534   14179 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 21:47:09.196569   14179 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 21:47:09.197539   14179 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:47:09.198568   14179 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:47:09.221213   14179 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 21:47:09.221276   14179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:09.275679   14179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-19 21:47:09.267087618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:47:09.275774   14179 docker.go:319] overlay module found
	I1119 21:47:09.277301   14179 out.go:179] * Using the docker driver based on user configuration
	I1119 21:47:09.278412   14179 start.go:309] selected driver: docker
	I1119 21:47:09.278424   14179 start.go:930] validating driver "docker" against <nil>
	I1119 21:47:09.278434   14179 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:47:09.279005   14179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:09.332641   14179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-19 21:47:09.323311203 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:47:09.332840   14179 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:47:09.333087   14179 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 21:47:09.334388   14179 out.go:179] * Using Docker driver with root privileges
	I1119 21:47:09.335409   14179 cni.go:84] Creating CNI manager for ""
	I1119 21:47:09.335459   14179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:47:09.335470   14179 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 21:47:09.335517   14179 start.go:353] cluster config:
	{Name:addons-418049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-418049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:47:09.336729   14179 out.go:179] * Starting "addons-418049" primary control-plane node in "addons-418049" cluster
	I1119 21:47:09.337723   14179 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 21:47:09.338608   14179 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 21:47:09.339431   14179 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:47:09.339469   14179 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 21:47:09.339481   14179 cache.go:65] Caching tarball of preloaded images
	I1119 21:47:09.339514   14179 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 21:47:09.339574   14179 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 21:47:09.339589   14179 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 21:47:09.339971   14179 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/config.json ...
	I1119 21:47:09.340000   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/config.json: {Name:mkd3486f71ee715842f91dc3decfe65edfd45631 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:09.354439   14179 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:47:09.354545   14179 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory
	I1119 21:47:09.354560   14179 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory, skipping pull
	I1119 21:47:09.354564   14179 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in cache, skipping pull
	I1119 21:47:09.354573   14179 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 as a tarball
	I1119 21:47:09.354578   14179 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 from local cache
	I1119 21:47:21.330767   14179 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 from cached tarball
	I1119 21:47:21.330828   14179 cache.go:243] Successfully downloaded all kic artifacts
	I1119 21:47:21.330882   14179 start.go:360] acquireMachinesLock for addons-418049: {Name:mk275dc52626d848e0f0a8364f95fd04a2a58c88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 21:47:21.330984   14179 start.go:364] duration metric: took 80.484µs to acquireMachinesLock for "addons-418049"
	I1119 21:47:21.331012   14179 start.go:93] Provisioning new machine with config: &{Name:addons-418049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-418049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 21:47:21.331102   14179 start.go:125] createHost starting for "" (driver="docker")
	I1119 21:47:21.332729   14179 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1119 21:47:21.332975   14179 start.go:159] libmachine.API.Create for "addons-418049" (driver="docker")
	I1119 21:47:21.333010   14179 client.go:173] LocalClient.Create starting
	I1119 21:47:21.333120   14179 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem
	I1119 21:47:21.709623   14179 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem
	I1119 21:47:21.919769   14179 cli_runner.go:164] Run: docker network inspect addons-418049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 21:47:21.937062   14179 cli_runner.go:211] docker network inspect addons-418049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 21:47:21.937123   14179 network_create.go:284] running [docker network inspect addons-418049] to gather additional debugging logs...
	I1119 21:47:21.937140   14179 cli_runner.go:164] Run: docker network inspect addons-418049
	W1119 21:47:21.952363   14179 cli_runner.go:211] docker network inspect addons-418049 returned with exit code 1
	I1119 21:47:21.952383   14179 network_create.go:287] error running [docker network inspect addons-418049]: docker network inspect addons-418049: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-418049 not found
	I1119 21:47:21.952394   14179 network_create.go:289] output of [docker network inspect addons-418049]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-418049 not found
	
	** /stderr **
	I1119 21:47:21.952500   14179 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 21:47:21.967992   14179 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c8add0}
	I1119 21:47:21.968020   14179 network_create.go:124] attempt to create docker network addons-418049 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1119 21:47:21.968064   14179 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-418049 addons-418049
	I1119 21:47:22.009935   14179 network_create.go:108] docker network addons-418049 192.168.49.0/24 created
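	Editor's note: the free-subnet probe and `docker network create` above follow a simple pattern: ask Docker which subnets are already claimed, then take the first unclaimed private /24. A minimal Go sketch of that idea, reusing the same `docker network inspect` template seen in the log; the candidate list and helper names here are illustrative, not minikube's actual implementation:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// usedSubnets collects the subnet of every existing Docker network.
	func usedSubnets() (map[string]bool, error) {
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			return nil, err
		}
		used := map[string]bool{}
		for _, name := range strings.Fields(string(out)) {
			sub, err := exec.Command("docker", "network", "inspect",
				"-f", "{{range .IPAM.Config}}{{.Subnet}} {{end}}", name).Output()
			if err != nil {
				continue // network may have vanished in the meantime; skip it
			}
			for _, s := range strings.Fields(string(sub)) {
				used[s] = true
			}
		}
		return used, nil
	}
	
	func main() {
		used, err := usedSubnets()
		if err != nil {
			panic(err)
		}
		// Illustrative candidates only; minikube derives its own sequence of ranges.
		for _, cand := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
			if !used[cand] {
				fmt.Println("free subnet:", cand)
				return
			}
		}
		fmt.Println("no free candidate subnet")
	}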
	I1119 21:47:22.009961   14179 kic.go:121] calculated static IP "192.168.49.2" for the "addons-418049" container
	I1119 21:47:22.010014   14179 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 21:47:22.025652   14179 cli_runner.go:164] Run: docker volume create addons-418049 --label name.minikube.sigs.k8s.io=addons-418049 --label created_by.minikube.sigs.k8s.io=true
	I1119 21:47:22.041555   14179 oci.go:103] Successfully created a docker volume addons-418049
	I1119 21:47:22.041614   14179 cli_runner.go:164] Run: docker run --rm --name addons-418049-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-418049 --entrypoint /usr/bin/test -v addons-418049:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 21:47:28.497581   14179 cli_runner.go:217] Completed: docker run --rm --name addons-418049-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-418049 --entrypoint /usr/bin/test -v addons-418049:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib: (6.455918227s)
	I1119 21:47:28.497615   14179 oci.go:107] Successfully prepared a docker volume addons-418049
	I1119 21:47:28.497660   14179 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:47:28.497671   14179 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 21:47:28.497732   14179 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-418049:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 21:47:32.715435   14179 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-418049:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.217651808s)
	I1119 21:47:32.715463   14179 kic.go:203] duration metric: took 4.217789117s to extract preloaded images to volume ...
	W1119 21:47:32.715541   14179 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 21:47:32.715575   14179 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 21:47:32.715611   14179 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 21:47:32.770485   14179 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-418049 --name addons-418049 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-418049 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-418049 --network addons-418049 --ip 192.168.49.2 --volume addons-418049:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 21:47:33.056172   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Running}}
	I1119 21:47:33.076037   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:33.093354   14179 cli_runner.go:164] Run: docker exec addons-418049 stat /var/lib/dpkg/alternatives/iptables
	I1119 21:47:33.136203   14179 oci.go:144] the created container "addons-418049" has a running status.
	I1119 21:47:33.136230   14179 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa...
	I1119 21:47:33.494470   14179 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 21:47:33.518034   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:33.534554   14179 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 21:47:33.534578   14179 kic_runner.go:114] Args: [docker exec --privileged addons-418049 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 21:47:33.575785   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:33.593505   14179 machine.go:94] provisionDockerMachine start ...
	I1119 21:47:33.593609   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:33.610125   14179 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:33.610395   14179 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 21:47:33.610412   14179 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 21:47:33.734878   14179 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-418049
	
	I1119 21:47:33.734905   14179 ubuntu.go:182] provisioning hostname "addons-418049"
	I1119 21:47:33.734970   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:33.751898   14179 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:33.752134   14179 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 21:47:33.752150   14179 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-418049 && echo "addons-418049" | sudo tee /etc/hostname
	I1119 21:47:33.880479   14179 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-418049
	
	I1119 21:47:33.880540   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:33.896943   14179 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:33.897158   14179 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 21:47:33.897182   14179 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-418049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-418049/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-418049' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 21:47:34.018465   14179 main.go:143] libmachine: SSH cmd err, output: <nil>: 
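	Editor's note: the "native" SSH client used for the provisioning steps above is just a Go SSH session against the host port that Docker mapped to the container's port 22 (32768 in this run), authenticated as user docker with the freshly generated id_rsa. A minimal sketch using golang.org/x/crypto/ssh, not minikube's own code; the key path and port are copied from the log and will differ on other runs:
	
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		// Key path and port taken from the log above; adjust for your own profile.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
	
		out, err := session.Output("hostname") // same probe the provisioner runs first
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out)
	}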
	I1119 21:47:34.018492   14179 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 21:47:34.018507   14179 ubuntu.go:190] setting up certificates
	I1119 21:47:34.018517   14179 provision.go:84] configureAuth start
	I1119 21:47:34.018570   14179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-418049
	I1119 21:47:34.034304   14179 provision.go:143] copyHostCerts
	I1119 21:47:34.034369   14179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 21:47:34.034474   14179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 21:47:34.034535   14179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 21:47:34.034593   14179 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.addons-418049 san=[127.0.0.1 192.168.49.2 addons-418049 localhost minikube]
	I1119 21:47:34.211516   14179 provision.go:177] copyRemoteCerts
	I1119 21:47:34.211571   14179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 21:47:34.211601   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:34.227713   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:34.316735   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 21:47:34.333750   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 21:47:34.348838   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 21:47:34.364214   14179 provision.go:87] duration metric: took 345.687974ms to configureAuth
	I1119 21:47:34.364233   14179 ubuntu.go:206] setting minikube options for container-runtime
	I1119 21:47:34.364378   14179 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:47:34.364461   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:34.380455   14179 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:34.380682   14179 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 21:47:34.380706   14179 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 21:47:34.631614   14179 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 21:47:34.631634   14179 machine.go:97] duration metric: took 1.03810264s to provisionDockerMachine
	I1119 21:47:34.631644   14179 client.go:176] duration metric: took 13.298624117s to LocalClient.Create
	I1119 21:47:34.631659   14179 start.go:167] duration metric: took 13.298685832s to libmachine.API.Create "addons-418049"
	I1119 21:47:34.631666   14179 start.go:293] postStartSetup for "addons-418049" (driver="docker")
	I1119 21:47:34.631674   14179 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 21:47:34.631722   14179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 21:47:34.631763   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:34.648292   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:34.738083   14179 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 21:47:34.741110   14179 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 21:47:34.741146   14179 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 21:47:34.741158   14179 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 21:47:34.741206   14179 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 21:47:34.741228   14179 start.go:296] duration metric: took 109.557672ms for postStartSetup
	I1119 21:47:34.741493   14179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-418049
	I1119 21:47:34.757955   14179 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/config.json ...
	I1119 21:47:34.758184   14179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 21:47:34.758226   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:34.774912   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:34.860860   14179 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 21:47:34.864704   14179 start.go:128] duration metric: took 13.533588406s to createHost
	I1119 21:47:34.864727   14179 start.go:83] releasing machines lock for "addons-418049", held for 13.533730301s
	I1119 21:47:34.864783   14179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-418049
	I1119 21:47:34.880012   14179 ssh_runner.go:195] Run: cat /version.json
	I1119 21:47:34.880057   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:34.880093   14179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 21:47:34.880151   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:34.897693   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:34.898103   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:34.984346   14179 ssh_runner.go:195] Run: systemctl --version
	I1119 21:47:35.038433   14179 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 21:47:35.069322   14179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 21:47:35.073434   14179 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 21:47:35.073485   14179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 21:47:35.096605   14179 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 21:47:35.096625   14179 start.go:496] detecting cgroup driver to use...
	I1119 21:47:35.096653   14179 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 21:47:35.096695   14179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 21:47:35.110693   14179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 21:47:35.121130   14179 docker.go:218] disabling cri-docker service (if available) ...
	I1119 21:47:35.121180   14179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 21:47:35.135283   14179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 21:47:35.150422   14179 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 21:47:35.229519   14179 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 21:47:35.309677   14179 docker.go:234] disabling docker service ...
	I1119 21:47:35.309725   14179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 21:47:35.325303   14179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 21:47:35.335939   14179 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 21:47:35.413423   14179 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 21:47:35.491348   14179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 21:47:35.502106   14179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 21:47:35.514167   14179 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 21:47:35.514209   14179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.522868   14179 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 21:47:35.522904   14179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.530448   14179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.537840   14179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.545341   14179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 21:47:35.552306   14179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.559638   14179 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.571226   14179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
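	Editor's note: taken together, the sed edits above steer /etc/crio/crio.conf.d/02-crio.conf toward a fragment along these lines (section headers omitted; the exact layout depends on the stock kicbase file):
	
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]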
	I1119 21:47:35.578786   14179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 21:47:35.585167   14179 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 21:47:35.585211   14179 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 21:47:35.595644   14179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 21:47:35.601913   14179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:47:35.674590   14179 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 21:47:35.800685   14179 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 21:47:35.800772   14179 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 21:47:35.804387   14179 start.go:564] Will wait 60s for crictl version
	I1119 21:47:35.804428   14179 ssh_runner.go:195] Run: which crictl
	I1119 21:47:35.807616   14179 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 21:47:35.830290   14179 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 21:47:35.830390   14179 ssh_runner.go:195] Run: crio --version
	I1119 21:47:35.855692   14179 ssh_runner.go:195] Run: crio --version
	I1119 21:47:35.881888   14179 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 21:47:35.883004   14179 cli_runner.go:164] Run: docker network inspect addons-418049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 21:47:35.900196   14179 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1119 21:47:35.903731   14179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 21:47:35.913050   14179 kubeadm.go:884] updating cluster {Name:addons-418049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-418049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 21:47:35.913156   14179 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:47:35.913195   14179 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:47:35.941600   14179 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:47:35.941617   14179 crio.go:433] Images already preloaded, skipping extraction
	I1119 21:47:35.941650   14179 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:47:35.963758   14179 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:47:35.963778   14179 cache_images.go:86] Images are preloaded, skipping loading
	I1119 21:47:35.963788   14179 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1119 21:47:35.963894   14179 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-418049 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-418049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 21:47:35.963964   14179 ssh_runner.go:195] Run: crio config
	I1119 21:47:36.004076   14179 cni.go:84] Creating CNI manager for ""
	I1119 21:47:36.004103   14179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:47:36.004121   14179 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 21:47:36.004142   14179 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-418049 NodeName:addons-418049 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 21:47:36.004252   14179 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-418049"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
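	Editor's note: the generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check such a file is to walk the documents and print each apiVersion/kind. A sketch using gopkg.in/yaml.v3, which is an assumption for illustration, not the library minikube itself uses:
	
	package main
	
	import (
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path taken from the log above
		if err != nil {
			panic(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%s/%s\n", doc["apiVersion"], doc["kind"])
		}
	}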
	
	I1119 21:47:36.004302   14179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 21:47:36.011478   14179 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 21:47:36.011549   14179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 21:47:36.018356   14179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1119 21:47:36.029402   14179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 21:47:36.043097   14179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1119 21:47:36.054470   14179 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1119 21:47:36.057591   14179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 21:47:36.066434   14179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:47:36.145585   14179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 21:47:36.169638   14179 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049 for IP: 192.168.49.2
	I1119 21:47:36.169658   14179 certs.go:195] generating shared ca certs ...
	I1119 21:47:36.169678   14179 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.169800   14179 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 21:47:36.313386   14179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt ...
	I1119 21:47:36.313407   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt: {Name:mk8a3ae1f4768e95b44f6ee834507ec0dd5a31b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.313544   14179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key ...
	I1119 21:47:36.313555   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key: {Name:mk2a77f344d56cbf0fc2983daf73c303614b3719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.313630   14179 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 21:47:36.539844   14179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt ...
	I1119 21:47:36.539865   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt: {Name:mk9aa5bf719ebb8ef9775762a12faf372326ce52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.539994   14179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key ...
	I1119 21:47:36.540004   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key: {Name:mke31cde355fc17855364fbd8b78836671f9a958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.540068   14179 certs.go:257] generating profile certs ...
	I1119 21:47:36.540126   14179 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.key
	I1119 21:47:36.540140   14179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt with IP's: []
	I1119 21:47:36.718418   14179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt ...
	I1119 21:47:36.718440   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: {Name:mk869b270e1b1cb84dd4e9178af439e37d4418c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.718577   14179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.key ...
	I1119 21:47:36.718587   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.key: {Name:mk780689b286c609054031aaf912087fb5f54ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.718655   14179 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.key.c8bea405
	I1119 21:47:36.718672   14179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.crt.c8bea405 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1119 21:47:36.775557   14179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.crt.c8bea405 ...
	I1119 21:47:36.775578   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.crt.c8bea405: {Name:mkeca76dbf33a82f5728e1ce61f80fc8d83990e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.775694   14179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.key.c8bea405 ...
	I1119 21:47:36.775705   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.key.c8bea405: {Name:mkeb950e3fd7103b516eda460865e4ee953f9e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.775777   14179 certs.go:382] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.crt.c8bea405 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.crt
	I1119 21:47:36.775867   14179 certs.go:386] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.key.c8bea405 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.key
	I1119 21:47:36.775915   14179 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.key
	I1119 21:47:36.775930   14179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.crt with IP's: []
	I1119 21:47:37.052088   14179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.crt ...
	I1119 21:47:37.052113   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.crt: {Name:mke0e1eed3e89beba161b6bb7f058d3ad91ea73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:37.052280   14179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.key ...
	I1119 21:47:37.052294   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.key: {Name:mkd565d37c5ba991c22773fb6cd174fca1711be6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:37.052486   14179 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 21:47:37.052519   14179 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 21:47:37.052542   14179 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 21:47:37.052564   14179 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 21:47:37.053106   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 21:47:37.070808   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 21:47:37.086766   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 21:47:37.102277   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 21:47:37.117692   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 21:47:37.132978   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 21:47:37.148302   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 21:47:37.163379   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 21:47:37.178314   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 21:47:37.195346   14179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 21:47:37.206314   14179 ssh_runner.go:195] Run: openssl version
	I1119 21:47:37.211907   14179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 21:47:37.221859   14179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:47:37.225061   14179 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:47:37.225104   14179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:47:37.258276   14179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 21:47:37.265507   14179 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 21:47:37.268463   14179 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 21:47:37.268509   14179 kubeadm.go:401] StartCluster: {Name:addons-418049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-418049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:47:37.268591   14179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:47:37.268636   14179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:47:37.292769   14179 cri.go:89] found id: ""
	I1119 21:47:37.292809   14179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 21:47:37.299702   14179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 21:47:37.306655   14179 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 21:47:37.306690   14179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 21:47:37.313380   14179 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 21:47:37.313393   14179 kubeadm.go:158] found existing configuration files:
	
	I1119 21:47:37.313418   14179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 21:47:37.319916   14179 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 21:47:37.319947   14179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 21:47:37.326196   14179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 21:47:37.332788   14179 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 21:47:37.332836   14179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 21:47:37.339086   14179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 21:47:37.345558   14179 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 21:47:37.345589   14179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 21:47:37.351956   14179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 21:47:37.358402   14179 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 21:47:37.358434   14179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 21:47:37.364842   14179 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 21:47:37.416101   14179 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 21:47:37.467449   14179 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 21:47:46.089232   14179 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 21:47:46.089315   14179 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 21:47:46.089385   14179 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 21:47:46.089430   14179 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 21:47:46.089459   14179 kubeadm.go:319] OS: Linux
	I1119 21:47:46.089496   14179 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 21:47:46.089535   14179 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 21:47:46.089635   14179 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 21:47:46.089708   14179 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 21:47:46.089755   14179 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 21:47:46.089796   14179 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 21:47:46.089853   14179 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 21:47:46.089891   14179 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 21:47:46.089961   14179 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 21:47:46.090053   14179 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 21:47:46.090187   14179 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 21:47:46.090288   14179 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 21:47:46.091710   14179 out.go:252]   - Generating certificates and keys ...
	I1119 21:47:46.091791   14179 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 21:47:46.091902   14179 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 21:47:46.091998   14179 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 21:47:46.092076   14179 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 21:47:46.092129   14179 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 21:47:46.092172   14179 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 21:47:46.092216   14179 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 21:47:46.092313   14179 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-418049 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 21:47:46.092356   14179 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 21:47:46.092450   14179 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-418049 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 21:47:46.092506   14179 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 21:47:46.092556   14179 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 21:47:46.092594   14179 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 21:47:46.092680   14179 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 21:47:46.092762   14179 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 21:47:46.092876   14179 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 21:47:46.092951   14179 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 21:47:46.093047   14179 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 21:47:46.093124   14179 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 21:47:46.093225   14179 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 21:47:46.093329   14179 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 21:47:46.094605   14179 out.go:252]   - Booting up control plane ...
	I1119 21:47:46.094680   14179 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 21:47:46.094760   14179 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 21:47:46.094855   14179 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 21:47:46.094994   14179 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 21:47:46.095097   14179 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 21:47:46.095226   14179 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 21:47:46.095396   14179 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 21:47:46.095451   14179 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 21:47:46.095565   14179 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 21:47:46.095648   14179 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 21:47:46.095715   14179 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.562793ms
	I1119 21:47:46.095829   14179 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 21:47:46.095943   14179 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1119 21:47:46.096016   14179 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 21:47:46.096103   14179 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 21:47:46.096219   14179 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.091629085s
	I1119 21:47:46.096284   14179 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.111984653s
	I1119 21:47:46.096398   14179 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.500894283s
	I1119 21:47:46.096502   14179 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 21:47:46.096608   14179 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 21:47:46.096658   14179 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 21:47:46.096910   14179 kubeadm.go:319] [mark-control-plane] Marking the node addons-418049 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 21:47:46.097001   14179 kubeadm.go:319] [bootstrap-token] Using token: rnz4hq.hop80trcclzl6sbi
	I1119 21:47:46.098350   14179 out.go:252]   - Configuring RBAC rules ...
	I1119 21:47:46.098464   14179 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 21:47:46.098573   14179 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 21:47:46.098726   14179 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 21:47:46.098859   14179 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 21:47:46.098963   14179 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 21:47:46.099041   14179 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 21:47:46.099142   14179 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 21:47:46.099186   14179 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 21:47:46.099242   14179 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 21:47:46.099251   14179 kubeadm.go:319] 
	I1119 21:47:46.099310   14179 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 21:47:46.099319   14179 kubeadm.go:319] 
	I1119 21:47:46.099416   14179 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 21:47:46.099426   14179 kubeadm.go:319] 
	I1119 21:47:46.099468   14179 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 21:47:46.099540   14179 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 21:47:46.099591   14179 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 21:47:46.099597   14179 kubeadm.go:319] 
	I1119 21:47:46.099642   14179 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 21:47:46.099648   14179 kubeadm.go:319] 
	I1119 21:47:46.099692   14179 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 21:47:46.099698   14179 kubeadm.go:319] 
	I1119 21:47:46.099763   14179 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 21:47:46.099873   14179 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 21:47:46.099969   14179 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 21:47:46.099978   14179 kubeadm.go:319] 
	I1119 21:47:46.100097   14179 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 21:47:46.100196   14179 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 21:47:46.100203   14179 kubeadm.go:319] 
	I1119 21:47:46.100292   14179 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rnz4hq.hop80trcclzl6sbi \
	I1119 21:47:46.100430   14179 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b \
	I1119 21:47:46.100551   14179 kubeadm.go:319] 	--control-plane 
	I1119 21:47:46.100564   14179 kubeadm.go:319] 
	I1119 21:47:46.100673   14179 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 21:47:46.100680   14179 kubeadm.go:319] 
	I1119 21:47:46.100751   14179 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rnz4hq.hop80trcclzl6sbi \
	I1119 21:47:46.100870   14179 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b 
	I1119 21:47:46.100881   14179 cni.go:84] Creating CNI manager for ""
	I1119 21:47:46.100891   14179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:47:46.102183   14179 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 21:47:46.103242   14179 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 21:47:46.107193   14179 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 21:47:46.107206   14179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 21:47:46.119501   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
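	With the docker driver and the crio runtime, minikube picks kindnet as the CNI and applies its manifest above. A rough follow-up check, assuming the kubectl context addons-418049 that minikube creates for this profile (the DaemonSet name kindnet is an assumption about the bundled manifest, not something printed in this log), could look like:
	
	  # hypothetical verification of the CNI rollout
	  kubectl --context addons-418049 -n kube-system get daemonset kindnet
	  kubectl --context addons-418049 get nodes -o wide
	
	Once the CNI pods are running, the node can become Ready, which the wait loop further down (waiting up to 6m0s for node "addons-418049" to be "Ready") depends on.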
	I1119 21:47:46.304890   14179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 21:47:46.304934   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:46.304971   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-418049 minikube.k8s.io/updated_at=2025_11_19T21_47_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=addons-418049 minikube.k8s.io/primary=true
	I1119 21:47:46.314516   14179 ops.go:34] apiserver oom_adj: -16
	I1119 21:47:46.375125   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:46.875341   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:47.375428   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:47.875777   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:48.375874   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:48.876061   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:49.375533   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:49.875359   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:50.375741   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:50.875277   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:51.375767   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:51.433199   14179 kubeadm.go:1114] duration metric: took 5.128303504s to wait for elevateKubeSystemPrivileges
	I1119 21:47:51.433237   14179 kubeadm.go:403] duration metric: took 14.164733011s to StartCluster
	I1119 21:47:51.433258   14179 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:51.433369   14179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 21:47:51.433737   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:51.433938   14179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 21:47:51.433966   14179 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 21:47:51.434038   14179 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1119 21:47:51.434140   14179 addons.go:70] Setting yakd=true in profile "addons-418049"
	I1119 21:47:51.434147   14179 addons.go:70] Setting inspektor-gadget=true in profile "addons-418049"
	I1119 21:47:51.434165   14179 addons.go:239] Setting addon inspektor-gadget=true in "addons-418049"
	I1119 21:47:51.434171   14179 addons.go:70] Setting storage-provisioner=true in profile "addons-418049"
	I1119 21:47:51.434183   14179 addons.go:239] Setting addon storage-provisioner=true in "addons-418049"
	I1119 21:47:51.434201   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434201   14179 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:47:51.434199   14179 addons.go:70] Setting volcano=true in profile "addons-418049"
	I1119 21:47:51.434222   14179 addons.go:70] Setting volumesnapshots=true in profile "addons-418049"
	I1119 21:47:51.434235   14179 addons.go:239] Setting addon volumesnapshots=true in "addons-418049"
	I1119 21:47:51.434218   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434250   14179 addons.go:70] Setting default-storageclass=true in profile "addons-418049"
	I1119 21:47:51.434257   14179 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-418049"
	I1119 21:47:51.434268   14179 addons.go:70] Setting cloud-spanner=true in profile "addons-418049"
	I1119 21:47:51.434273   14179 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-418049"
	I1119 21:47:51.434278   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434280   14179 addons.go:70] Setting registry=true in profile "addons-418049"
	I1119 21:47:51.434290   14179 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-418049"
	I1119 21:47:51.434297   14179 addons.go:239] Setting addon registry=true in "addons-418049"
	I1119 21:47:51.434302   14179 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-418049"
	I1119 21:47:51.434329   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434339   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434598   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434740   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434777   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434789   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434802   14179 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-418049"
	I1119 21:47:51.434805   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434860   14179 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-418049"
	I1119 21:47:51.434251   14179 addons.go:70] Setting metrics-server=true in profile "addons-418049"
	I1119 21:47:51.434933   14179 addons.go:70] Setting ingress=true in profile "addons-418049"
	I1119 21:47:51.434998   14179 addons.go:239] Setting addon ingress=true in "addons-418049"
	I1119 21:47:51.435065   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434789   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434276   14179 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-418049"
	I1119 21:47:51.435359   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.435728   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.435838   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.436039   14179 out.go:179] * Verifying Kubernetes components...
	I1119 21:47:51.434165   14179 addons.go:239] Setting addon yakd=true in "addons-418049"
	I1119 21:47:51.436110   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.436576   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434199   14179 addons.go:70] Setting registry-creds=true in profile "addons-418049"
	I1119 21:47:51.437871   14179 addons.go:239] Setting addon registry-creds=true in "addons-418049"
	I1119 21:47:51.437899   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.438368   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434952   14179 addons.go:239] Setting addon metrics-server=true in "addons-418049"
	I1119 21:47:51.442503   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434241   14179 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-418049"
	I1119 21:47:51.442952   14179 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-418049"
	I1119 21:47:51.443022   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434286   14179 addons.go:239] Setting addon cloud-spanner=true in "addons-418049"
	I1119 21:47:51.434895   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.443089   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.443254   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434236   14179 addons.go:239] Setting addon volcano=true in "addons-418049"
	I1119 21:47:51.444495   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.444846   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.444976   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.443346   14179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:47:51.434971   14179 addons.go:70] Setting ingress-dns=true in profile "addons-418049"
	I1119 21:47:51.445278   14179 addons.go:239] Setting addon ingress-dns=true in "addons-418049"
	I1119 21:47:51.445322   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.445748   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434961   14179 addons.go:70] Setting gcp-auth=true in profile "addons-418049"
	I1119 21:47:51.446087   14179 mustload.go:66] Loading cluster: addons-418049
	I1119 21:47:51.444089   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.446291   14179 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:47:51.446542   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.476652   14179 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1119 21:47:51.479384   14179 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 21:47:51.479409   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1119 21:47:51.479471   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.486911   14179 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1119 21:47:51.488753   14179 out.go:179]   - Using image docker.io/registry:3.0.0
	I1119 21:47:51.493921   14179 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1119 21:47:51.493951   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1119 21:47:51.494031   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.497561   14179 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1119 21:47:51.499209   14179 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 21:47:51.499228   14179 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 21:47:51.499298   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.512276   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1119 21:47:51.514631   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1119 21:47:51.514690   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1119 21:47:51.515923   14179 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1119 21:47:51.515964   14179 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1119 21:47:51.516035   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.516974   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1119 21:47:51.518688   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1119 21:47:51.520780   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1119 21:47:51.522744   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1119 21:47:51.523506   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.529874   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1119 21:47:51.531311   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W1119 21:47:51.536366   14179 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1119 21:47:51.536651   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1119 21:47:51.537988   14179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1119 21:47:51.538079   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.536907   14179 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1119 21:47:51.536937   14179 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1119 21:47:51.539624   14179 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1119 21:47:51.539646   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1119 21:47:51.539692   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.539843   14179 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1119 21:47:51.539892   14179 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 21:47:51.539906   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1119 21:47:51.539960   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.542836   14179 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1119 21:47:51.542853   14179 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1119 21:47:51.542914   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.544992   14179 addons.go:239] Setting addon default-storageclass=true in "addons-418049"
	I1119 21:47:51.545038   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.545523   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.546597   14179 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1119 21:47:51.548132   14179 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1119 21:47:51.548152   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1119 21:47:51.548197   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.553990   14179 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1119 21:47:51.557105   14179 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 21:47:51.557129   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1119 21:47:51.557180   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.558008   14179 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-418049"
	I1119 21:47:51.558048   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.558545   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.558763   14179 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 21:47:51.560458   14179 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 21:47:51.560482   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 21:47:51.560531   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.561753   14179 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1119 21:47:51.563156   14179 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 21:47:51.563543   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1119 21:47:51.563708   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.568325   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.572919   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.573399   14179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 21:47:51.574837   14179 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1119 21:47:51.578840   14179 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:47:51.584779   14179 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:47:51.591307   14179 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 21:47:51.591331   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1119 21:47:51.591389   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.615251   14179 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1119 21:47:51.617926   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.620223   14179 out.go:179]   - Using image docker.io/busybox:stable
	I1119 21:47:51.621736   14179 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 21:47:51.621780   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1119 21:47:51.621859   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.636123   14179 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 21:47:51.636145   14179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 21:47:51.636167   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.636205   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.636116   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.637467   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.638958   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.640986   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.641209   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.646124   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.646292   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.649313   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.651999   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	W1119 21:47:51.654093   14179 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1119 21:47:51.654141   14179 retry.go:31] will retry after 233.822989ms: ssh: handshake failed: EOF
	I1119 21:47:51.663633   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.687989   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	W1119 21:47:51.689021   14179 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1119 21:47:51.689043   14179 retry.go:31] will retry after 194.998612ms: ssh: handshake failed: EOF
	I1119 21:47:51.698422   14179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 21:47:51.749970   14179 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1119 21:47:51.749994   14179 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1119 21:47:51.763610   14179 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 21:47:51.763637   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1119 21:47:51.771494   14179 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1119 21:47:51.771516   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1119 21:47:51.776124   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 21:47:51.780015   14179 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 21:47:51.780033   14179 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 21:47:51.787318   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1119 21:47:51.797225   14179 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 21:47:51.797245   14179 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 21:47:51.817279   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1119 21:47:51.817304   14179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1119 21:47:51.828824   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 21:47:51.829168   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 21:47:51.831850   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1119 21:47:51.842011   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 21:47:51.842319   14179 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1119 21:47:51.842368   14179 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1119 21:47:51.842857   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 21:47:51.842988   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1119 21:47:51.843929   14179 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1119 21:47:51.843985   14179 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1119 21:47:51.850772   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 21:47:51.851282   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 21:47:51.863090   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1119 21:47:51.863161   14179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1119 21:47:51.878465   14179 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1119 21:47:51.878489   14179 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1119 21:47:51.896442   14179 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1119 21:47:51.896467   14179 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1119 21:47:51.903780   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1119 21:47:51.903803   14179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1119 21:47:51.915790   14179 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1119 21:47:51.915824   14179 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1119 21:47:51.934062   14179 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1119 21:47:51.934098   14179 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1119 21:47:51.934756   14179 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1119 21:47:51.935916   14179 node_ready.go:35] waiting up to 6m0s for node "addons-418049" to be "Ready" ...
	I1119 21:47:51.961578   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1119 21:47:51.961607   14179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1119 21:47:51.964859   14179 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1119 21:47:51.964879   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1119 21:47:51.995699   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1119 21:47:51.995726   14179 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1119 21:47:52.002371   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1119 21:47:52.002395   14179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1119 21:47:52.022319   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1119 21:47:52.055461   14179 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:47:52.055486   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1119 21:47:52.069329   14179 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1119 21:47:52.069349   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1119 21:47:52.085134   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 21:47:52.108552   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:47:52.126805   14179 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1119 21:47:52.126843   14179 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1119 21:47:52.149192   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 21:47:52.175650   14179 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1119 21:47:52.175739   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1119 21:47:52.243607   14179 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1119 21:47:52.243632   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1119 21:47:52.275580   14179 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 21:47:52.275773   14179 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1119 21:47:52.313952   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 21:47:52.457026   14179 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-418049" context rescaled to 1 replicas
	I1119 21:47:52.480007   14179 addons.go:480] Verifying addon registry=true in "addons-418049"
	I1119 21:47:52.482063   14179 out.go:179] * Verifying registry addon...
	I1119 21:47:52.483901   14179 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1119 21:47:52.487582   14179 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 21:47:52.487645   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:52.699570   14179 addons.go:480] Verifying addon metrics-server=true in "addons-418049"
	I1119 21:47:52.921245   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.079191315s)
	I1119 21:47:52.921276   14179 addons.go:480] Verifying addon ingress=true in "addons-418049"
	I1119 21:47:52.921314   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.078429177s)
	I1119 21:47:52.921427   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.078421968s)
	I1119 21:47:52.921556   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070659647s)
	I1119 21:47:52.921565   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.070261817s)
	I1119 21:47:52.922831   14179 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-418049 service yakd-dashboard -n yakd-dashboard
	
	I1119 21:47:52.922973   14179 out.go:179] * Verifying ingress addon...
	I1119 21:47:52.924687   14179 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1119 21:47:52.926868   14179 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1119 21:47:53.027173   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:53.399515   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.290920232s)
	W1119 21:47:53.399576   14179 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 21:47:53.399607   14179 retry.go:31] will retry after 170.940812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 21:47:53.399605   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.250328716s)
	I1119 21:47:53.399845   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.085834597s)
	I1119 21:47:53.399878   14179 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-418049"
	I1119 21:47:53.402890   14179 out.go:179] * Verifying csi-hostpath-driver addon...
	I1119 21:47:53.405230   14179 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1119 21:47:53.407348   14179 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 21:47:53.407368   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:53.427623   14179 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1119 21:47:53.427640   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:53.485616   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:53.570963   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:47:53.907721   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:53.927442   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:47:53.938110   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:47:54.008040   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:54.408225   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:54.427036   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:54.486264   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:54.908343   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:54.927273   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:55.008380   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:55.407833   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:55.427771   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:55.485877   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:55.908348   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:55.926800   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:47:55.938588   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:47:56.008623   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:56.010352   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.439352015s)
	I1119 21:47:56.407363   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:56.427008   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:56.486010   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:56.908281   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:56.927008   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:57.008719   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:57.407429   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:57.427345   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:57.486669   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:57.907546   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:57.927600   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:58.008011   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:58.408274   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:58.427039   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:47:58.437554   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:47:58.486118   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:58.908208   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:58.927225   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:59.009113   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:59.139671   14179 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1119 21:47:59.139726   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:59.157301   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:59.251570   14179 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1119 21:47:59.263203   14179 addons.go:239] Setting addon gcp-auth=true in "addons-418049"
	I1119 21:47:59.263241   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:59.263582   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:59.280884   14179 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1119 21:47:59.280922   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:59.296834   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:59.384932   14179 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:47:59.386054   14179 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1119 21:47:59.387034   14179 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1119 21:47:59.387046   14179 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1119 21:47:59.398716   14179 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1119 21:47:59.398732   14179 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1119 21:47:59.408448   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:59.410822   14179 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 21:47:59.410840   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1119 21:47:59.422146   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 21:47:59.427127   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:59.487157   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:59.698250   14179 addons.go:480] Verifying addon gcp-auth=true in "addons-418049"
	I1119 21:47:59.699514   14179 out.go:179] * Verifying gcp-auth addon...
	I1119 21:47:59.701312   14179 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1119 21:47:59.703453   14179 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1119 21:47:59.703471   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:47:59.907626   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:59.928045   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:59.986369   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:00.203334   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:00.407468   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:00.427402   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:00.437953   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:00.486455   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:00.703993   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:00.908152   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:00.927073   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:00.986250   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:01.203233   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:01.407489   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:01.427422   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:01.485573   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:01.703938   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:01.908043   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:01.926971   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:01.986404   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:02.203515   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:02.407956   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:02.426763   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:02.438207   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:02.485681   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:02.703939   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:02.908191   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:02.927268   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:02.985613   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:03.203987   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:03.407946   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:03.426888   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:03.486090   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:03.704060   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:03.908137   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:03.926988   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:03.986627   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:04.203809   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:04.408039   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:04.426906   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:04.438320   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:04.485998   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:04.705405   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:04.907787   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:04.927426   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:04.985758   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:05.204177   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:05.408285   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:05.427039   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:05.486157   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:05.704752   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:05.907894   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:05.926618   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:05.985995   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:06.204126   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:06.408544   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:06.427290   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:06.486527   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:06.703578   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:06.907974   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:06.926874   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:06.938461   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:06.986040   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:07.204440   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:07.407750   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:07.427774   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:07.485970   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:07.704195   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:07.907279   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:07.927103   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:07.986356   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:08.203197   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:08.407323   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:08.427108   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:08.486191   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:08.703499   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:08.908187   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:08.927089   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:08.986253   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:09.203568   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:09.407904   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:09.426744   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:09.438292   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:09.485876   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:09.703974   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:09.908179   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:09.927098   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:09.986444   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:10.203852   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:10.408564   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:10.427606   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:10.486654   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:10.704017   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:10.908579   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:10.927924   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:10.986407   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:11.203841   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:11.408139   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:11.427098   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:11.486346   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:11.703732   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:11.908055   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:11.926974   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:11.938720   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:11.986322   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:12.203400   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:12.407690   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:12.427495   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:12.485781   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:12.703797   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:12.908022   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:12.926884   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:12.986145   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:13.204211   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:13.407565   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:13.427917   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:13.486094   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:13.704363   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:13.907347   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:13.927123   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:13.986789   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:14.204012   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:14.408496   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:14.427294   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:14.438059   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:14.486474   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:14.703491   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:14.908130   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:14.926897   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:14.985932   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:15.204339   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:15.407503   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:15.427378   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:15.485544   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:15.703619   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:15.907882   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:15.926515   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:15.985977   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:16.204239   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:16.407684   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:16.427525   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:16.438091   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:16.485758   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:16.703854   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:16.907947   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:16.926618   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:16.985730   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:17.203906   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:17.408007   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:17.426862   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:17.486187   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:17.704355   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:17.907402   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:17.927110   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:17.986093   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:18.204209   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:18.407158   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:18.427003   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:18.486124   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:18.704091   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:18.908161   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:18.927201   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:18.937835   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:18.986566   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:19.203753   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:19.407672   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:19.427506   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:19.485730   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:19.703705   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:19.907988   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:19.926744   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:19.985857   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:20.203939   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:20.407991   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:20.426769   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:20.485863   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:20.703868   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:20.907915   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:20.926823   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:20.938449   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:20.986060   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:21.204003   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:21.408117   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:21.427030   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:21.486208   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:21.703322   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:21.907249   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:21.926978   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:21.986042   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:22.204583   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:22.407797   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:22.426773   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:22.486279   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:22.703328   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:22.907309   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:22.927230   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:22.986356   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:23.203281   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:23.407251   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:23.427168   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:23.437593   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:23.486159   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:23.703978   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:23.908100   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:23.926918   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:23.986221   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:24.203418   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:24.408014   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:24.426850   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:24.486263   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:24.703465   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:24.907778   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:24.927762   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:24.985927   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:25.204013   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:25.408261   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:25.427204   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:25.438063   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:25.485576   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:25.703694   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:25.907915   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:25.926630   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:25.985731   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:26.203788   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:26.407893   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:26.426715   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:26.485966   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:26.704287   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:26.907563   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:26.927445   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:26.985775   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:27.203994   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:27.408197   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:27.426972   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:27.438493   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:27.486054   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:27.704149   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:27.907455   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:27.927356   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:27.986495   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:28.203510   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:28.407402   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:28.427198   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:28.486394   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:28.703461   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:28.907642   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:28.927595   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:28.985902   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:29.204258   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:29.407397   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:29.427327   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:29.485674   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:29.703830   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:29.907962   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:29.926910   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:29.938592   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:29.986233   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:30.203494   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:30.407586   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:30.427424   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:30.485719   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:30.703739   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:30.907990   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:30.926789   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:30.986145   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:31.204322   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:31.407378   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:31.427237   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:31.486510   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:31.703735   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:31.907747   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:31.928117   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:31.986214   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:32.203122   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:32.408620   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:32.427378   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:32.438118   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:32.485839   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:32.703921   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:32.908298   14179 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 21:48:32.908326   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:32.927297   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:32.937774   14179 node_ready.go:49] node "addons-418049" is "Ready"
	I1119 21:48:32.937793   14179 node_ready.go:38] duration metric: took 41.00185866s for node "addons-418049" to be "Ready" ...
	I1119 21:48:32.937807   14179 api_server.go:52] waiting for apiserver process to appear ...
	I1119 21:48:32.937871   14179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:48:32.953495   14179 api_server.go:72] duration metric: took 41.519496566s to wait for apiserver process to appear ...
	I1119 21:48:32.953520   14179 api_server.go:88] waiting for apiserver healthz status ...
	I1119 21:48:32.953541   14179 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1119 21:48:32.957992   14179 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1119 21:48:32.958879   14179 api_server.go:141] control plane version: v1.34.1
	I1119 21:48:32.958907   14179 api_server.go:131] duration metric: took 5.379808ms to wait for apiserver health ...
	I1119 21:48:32.958918   14179 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 21:48:32.962162   14179 system_pods.go:59] 20 kube-system pods found
	I1119 21:48:32.962188   14179 system_pods.go:61] "amd-gpu-device-plugin-2tvsr" [2423e06d-a3f8-4cdf-9d51-7007aac8105b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 21:48:32.962195   14179 system_pods.go:61] "coredns-66bc5c9577-7v6rp" [025a1fd1-54b8-4c2a-9396-a314e9e9ce42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:48:32.962202   14179 system_pods.go:61] "csi-hostpath-attacher-0" [5cb09132-5e1a-4574-a6a5-51703af7e782] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:48:32.962207   14179 system_pods.go:61] "csi-hostpath-resizer-0" [65e11f74-ff7f-4f74-97a3-38ad91e62f43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:48:32.962213   14179 system_pods.go:61] "csi-hostpathplugin-2mv8p" [afa72be6-aaa8-49bb-be6d-f69f6ab56d62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:48:32.962217   14179 system_pods.go:61] "etcd-addons-418049" [d62d961a-4ff4-4725-8117-7980a03c4db6] Running
	I1119 21:48:32.962221   14179 system_pods.go:61] "kindnet-52bj8" [2737a8dd-e93f-431c-be31-f1b22dce9519] Running
	I1119 21:48:32.962225   14179 system_pods.go:61] "kube-apiserver-addons-418049" [f7b37bb1-9c05-41f7-b76d-fa279ef5e122] Running
	I1119 21:48:32.962228   14179 system_pods.go:61] "kube-controller-manager-addons-418049" [5b61c8e6-3136-4a52-b7cd-be38ac892b5f] Running
	I1119 21:48:32.962233   14179 system_pods.go:61] "kube-ingress-dns-minikube" [ce2e729e-662a-477b-a9ff-ee58569a350d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:48:32.962239   14179 system_pods.go:61] "kube-proxy-8rrhm" [ea9ea337-5c88-4577-b868-82aaa0234723] Running
	I1119 21:48:32.962242   14179 system_pods.go:61] "kube-scheduler-addons-418049" [0006366e-06a4-402a-a25b-dab34f284544] Running
	I1119 21:48:32.962247   14179 system_pods.go:61] "metrics-server-85b7d694d7-ggkmz" [d454f251-2d9d-4a61-a3a8-4aa052b74bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:48:32.962254   14179 system_pods.go:61] "nvidia-device-plugin-daemonset-86rtv" [a0be7cf6-3cc7-4cee-aea7-f7413045caad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:48:32.962262   14179 system_pods.go:61] "registry-6b586f9694-7pv4f" [c273eed6-5720-40a2-aac8-2149c492c58d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:48:32.962267   14179 system_pods.go:61] "registry-creds-764b6fb674-j5lrp" [eefdd28e-9cfa-4e4a-8c18-ecececdc9c06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:48:32.962272   14179 system_pods.go:61] "registry-proxy-znvmk" [5769f042-e96d-4796-9313-85e723be54a5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:48:32.962279   14179 system_pods.go:61] "snapshot-controller-7d9fbc56b8-knb29" [1f62ae02-aa23-4fae-b7dd-44e2385bce74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:32.962283   14179 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rcvt9" [718fcfab-fbf5-46f9-84ed-a5b10d561277] Pending
	I1119 21:48:32.962288   14179 system_pods.go:61] "storage-provisioner" [147f8cf8-e9ef-4b12-afd8-1fbb995db186] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:48:32.962292   14179 system_pods.go:74] duration metric: took 3.369733ms to wait for pod list to return data ...
	I1119 21:48:32.962300   14179 default_sa.go:34] waiting for default service account to be created ...
	I1119 21:48:32.964060   14179 default_sa.go:45] found service account: "default"
	I1119 21:48:32.964079   14179 default_sa.go:55] duration metric: took 1.773697ms for default service account to be created ...
	I1119 21:48:32.964088   14179 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 21:48:32.967168   14179 system_pods.go:86] 20 kube-system pods found
	I1119 21:48:32.967197   14179 system_pods.go:89] "amd-gpu-device-plugin-2tvsr" [2423e06d-a3f8-4cdf-9d51-7007aac8105b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 21:48:32.967208   14179 system_pods.go:89] "coredns-66bc5c9577-7v6rp" [025a1fd1-54b8-4c2a-9396-a314e9e9ce42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:48:32.967220   14179 system_pods.go:89] "csi-hostpath-attacher-0" [5cb09132-5e1a-4574-a6a5-51703af7e782] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:48:32.967230   14179 system_pods.go:89] "csi-hostpath-resizer-0" [65e11f74-ff7f-4f74-97a3-38ad91e62f43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:48:32.967243   14179 system_pods.go:89] "csi-hostpathplugin-2mv8p" [afa72be6-aaa8-49bb-be6d-f69f6ab56d62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:48:32.967251   14179 system_pods.go:89] "etcd-addons-418049" [d62d961a-4ff4-4725-8117-7980a03c4db6] Running
	I1119 21:48:32.967258   14179 system_pods.go:89] "kindnet-52bj8" [2737a8dd-e93f-431c-be31-f1b22dce9519] Running
	I1119 21:48:32.967267   14179 system_pods.go:89] "kube-apiserver-addons-418049" [f7b37bb1-9c05-41f7-b76d-fa279ef5e122] Running
	I1119 21:48:32.967272   14179 system_pods.go:89] "kube-controller-manager-addons-418049" [5b61c8e6-3136-4a52-b7cd-be38ac892b5f] Running
	I1119 21:48:32.967287   14179 system_pods.go:89] "kube-ingress-dns-minikube" [ce2e729e-662a-477b-a9ff-ee58569a350d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:48:32.967292   14179 system_pods.go:89] "kube-proxy-8rrhm" [ea9ea337-5c88-4577-b868-82aaa0234723] Running
	I1119 21:48:32.967298   14179 system_pods.go:89] "kube-scheduler-addons-418049" [0006366e-06a4-402a-a25b-dab34f284544] Running
	I1119 21:48:32.967305   14179 system_pods.go:89] "metrics-server-85b7d694d7-ggkmz" [d454f251-2d9d-4a61-a3a8-4aa052b74bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:48:32.967313   14179 system_pods.go:89] "nvidia-device-plugin-daemonset-86rtv" [a0be7cf6-3cc7-4cee-aea7-f7413045caad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:48:32.967322   14179 system_pods.go:89] "registry-6b586f9694-7pv4f" [c273eed6-5720-40a2-aac8-2149c492c58d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:48:32.967330   14179 system_pods.go:89] "registry-creds-764b6fb674-j5lrp" [eefdd28e-9cfa-4e4a-8c18-ecececdc9c06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:48:32.967340   14179 system_pods.go:89] "registry-proxy-znvmk" [5769f042-e96d-4796-9313-85e723be54a5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:48:32.967348   14179 system_pods.go:89] "snapshot-controller-7d9fbc56b8-knb29" [1f62ae02-aa23-4fae-b7dd-44e2385bce74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:32.967356   14179 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rcvt9" [718fcfab-fbf5-46f9-84ed-a5b10d561277] Pending
	I1119 21:48:32.967364   14179 system_pods.go:89] "storage-provisioner" [147f8cf8-e9ef-4b12-afd8-1fbb995db186] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:48:32.967383   14179 retry.go:31] will retry after 196.651691ms: missing components: kube-dns
	I1119 21:48:32.987371   14179 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 21:48:32.987391   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:33.173708   14179 system_pods.go:86] 20 kube-system pods found
	I1119 21:48:33.173747   14179 system_pods.go:89] "amd-gpu-device-plugin-2tvsr" [2423e06d-a3f8-4cdf-9d51-7007aac8105b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 21:48:33.173757   14179 system_pods.go:89] "coredns-66bc5c9577-7v6rp" [025a1fd1-54b8-4c2a-9396-a314e9e9ce42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:48:33.173766   14179 system_pods.go:89] "csi-hostpath-attacher-0" [5cb09132-5e1a-4574-a6a5-51703af7e782] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:48:33.173773   14179 system_pods.go:89] "csi-hostpath-resizer-0" [65e11f74-ff7f-4f74-97a3-38ad91e62f43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:48:33.173782   14179 system_pods.go:89] "csi-hostpathplugin-2mv8p" [afa72be6-aaa8-49bb-be6d-f69f6ab56d62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:48:33.173787   14179 system_pods.go:89] "etcd-addons-418049" [d62d961a-4ff4-4725-8117-7980a03c4db6] Running
	I1119 21:48:33.173793   14179 system_pods.go:89] "kindnet-52bj8" [2737a8dd-e93f-431c-be31-f1b22dce9519] Running
	I1119 21:48:33.173798   14179 system_pods.go:89] "kube-apiserver-addons-418049" [f7b37bb1-9c05-41f7-b76d-fa279ef5e122] Running
	I1119 21:48:33.173804   14179 system_pods.go:89] "kube-controller-manager-addons-418049" [5b61c8e6-3136-4a52-b7cd-be38ac892b5f] Running
	I1119 21:48:33.173831   14179 system_pods.go:89] "kube-ingress-dns-minikube" [ce2e729e-662a-477b-a9ff-ee58569a350d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:48:33.173838   14179 system_pods.go:89] "kube-proxy-8rrhm" [ea9ea337-5c88-4577-b868-82aaa0234723] Running
	I1119 21:48:33.173844   14179 system_pods.go:89] "kube-scheduler-addons-418049" [0006366e-06a4-402a-a25b-dab34f284544] Running
	I1119 21:48:33.173851   14179 system_pods.go:89] "metrics-server-85b7d694d7-ggkmz" [d454f251-2d9d-4a61-a3a8-4aa052b74bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:48:33.173859   14179 system_pods.go:89] "nvidia-device-plugin-daemonset-86rtv" [a0be7cf6-3cc7-4cee-aea7-f7413045caad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:48:33.173869   14179 system_pods.go:89] "registry-6b586f9694-7pv4f" [c273eed6-5720-40a2-aac8-2149c492c58d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:48:33.173930   14179 system_pods.go:89] "registry-creds-764b6fb674-j5lrp" [eefdd28e-9cfa-4e4a-8c18-ecececdc9c06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:48:33.173947   14179 system_pods.go:89] "registry-proxy-znvmk" [5769f042-e96d-4796-9313-85e723be54a5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:48:33.173955   14179 system_pods.go:89] "snapshot-controller-7d9fbc56b8-knb29" [1f62ae02-aa23-4fae-b7dd-44e2385bce74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:33.173964   14179 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rcvt9" [718fcfab-fbf5-46f9-84ed-a5b10d561277] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:33.173971   14179 system_pods.go:89] "storage-provisioner" [147f8cf8-e9ef-4b12-afd8-1fbb995db186] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:48:33.173991   14179 retry.go:31] will retry after 371.854576ms: missing components: kube-dns
	I1119 21:48:33.270020   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:33.408919   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:33.427895   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:33.508730   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:33.549486   14179 system_pods.go:86] 20 kube-system pods found
	I1119 21:48:33.549515   14179 system_pods.go:89] "amd-gpu-device-plugin-2tvsr" [2423e06d-a3f8-4cdf-9d51-7007aac8105b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 21:48:33.549521   14179 system_pods.go:89] "coredns-66bc5c9577-7v6rp" [025a1fd1-54b8-4c2a-9396-a314e9e9ce42] Running
	I1119 21:48:33.549528   14179 system_pods.go:89] "csi-hostpath-attacher-0" [5cb09132-5e1a-4574-a6a5-51703af7e782] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:48:33.549533   14179 system_pods.go:89] "csi-hostpath-resizer-0" [65e11f74-ff7f-4f74-97a3-38ad91e62f43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:48:33.549540   14179 system_pods.go:89] "csi-hostpathplugin-2mv8p" [afa72be6-aaa8-49bb-be6d-f69f6ab56d62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:48:33.549545   14179 system_pods.go:89] "etcd-addons-418049" [d62d961a-4ff4-4725-8117-7980a03c4db6] Running
	I1119 21:48:33.549549   14179 system_pods.go:89] "kindnet-52bj8" [2737a8dd-e93f-431c-be31-f1b22dce9519] Running
	I1119 21:48:33.549553   14179 system_pods.go:89] "kube-apiserver-addons-418049" [f7b37bb1-9c05-41f7-b76d-fa279ef5e122] Running
	I1119 21:48:33.549556   14179 system_pods.go:89] "kube-controller-manager-addons-418049" [5b61c8e6-3136-4a52-b7cd-be38ac892b5f] Running
	I1119 21:48:33.549564   14179 system_pods.go:89] "kube-ingress-dns-minikube" [ce2e729e-662a-477b-a9ff-ee58569a350d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:48:33.549568   14179 system_pods.go:89] "kube-proxy-8rrhm" [ea9ea337-5c88-4577-b868-82aaa0234723] Running
	I1119 21:48:33.549572   14179 system_pods.go:89] "kube-scheduler-addons-418049" [0006366e-06a4-402a-a25b-dab34f284544] Running
	I1119 21:48:33.549581   14179 system_pods.go:89] "metrics-server-85b7d694d7-ggkmz" [d454f251-2d9d-4a61-a3a8-4aa052b74bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:48:33.549586   14179 system_pods.go:89] "nvidia-device-plugin-daemonset-86rtv" [a0be7cf6-3cc7-4cee-aea7-f7413045caad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:48:33.549592   14179 system_pods.go:89] "registry-6b586f9694-7pv4f" [c273eed6-5720-40a2-aac8-2149c492c58d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:48:33.549597   14179 system_pods.go:89] "registry-creds-764b6fb674-j5lrp" [eefdd28e-9cfa-4e4a-8c18-ecececdc9c06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:48:33.549603   14179 system_pods.go:89] "registry-proxy-znvmk" [5769f042-e96d-4796-9313-85e723be54a5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:48:33.549607   14179 system_pods.go:89] "snapshot-controller-7d9fbc56b8-knb29" [1f62ae02-aa23-4fae-b7dd-44e2385bce74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:33.549615   14179 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rcvt9" [718fcfab-fbf5-46f9-84ed-a5b10d561277] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:33.549619   14179 system_pods.go:89] "storage-provisioner" [147f8cf8-e9ef-4b12-afd8-1fbb995db186] Running
	I1119 21:48:33.549626   14179 system_pods.go:126] duration metric: took 585.532417ms to wait for k8s-apps to be running ...
	I1119 21:48:33.549635   14179 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 21:48:33.549671   14179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 21:48:33.562196   14179 system_svc.go:56] duration metric: took 12.554401ms WaitForService to wait for kubelet
	I1119 21:48:33.562225   14179 kubeadm.go:587] duration metric: took 42.128229298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 21:48:33.562248   14179 node_conditions.go:102] verifying NodePressure condition ...
	I1119 21:48:33.564114   14179 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 21:48:33.564136   14179 node_conditions.go:123] node cpu capacity is 8
	I1119 21:48:33.564148   14179 node_conditions.go:105] duration metric: took 1.895747ms to run NodePressure ...
	I1119 21:48:33.564159   14179 start.go:242] waiting for startup goroutines ...
	I1119 21:48:33.704418   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:33.908257   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:33.927977   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:33.989207   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:34.204676   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:34.408710   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:34.428273   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:34.486791   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:34.704617   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:34.909062   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:34.928163   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:34.986125   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:35.205170   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:35.410148   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:35.427570   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:35.487144   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:35.704966   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:35.908888   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:35.927237   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:35.986372   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:36.204180   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:36.409135   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:36.427440   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:36.486509   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:36.704475   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:36.908551   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:36.927963   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:36.987587   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:37.204388   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:37.408796   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:37.428171   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:37.486839   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:37.704908   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:37.908965   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:37.927355   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:37.987104   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:38.204638   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:38.407939   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:38.426879   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:38.486915   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:38.704558   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:38.908871   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:38.928510   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:38.986771   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:39.204810   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:39.409687   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:39.428310   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:39.486827   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:39.704560   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:39.908599   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:39.928760   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:39.987028   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:40.205259   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:40.408526   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:40.429184   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:40.488184   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:40.705329   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:40.908298   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:40.927386   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:40.986478   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:41.204560   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:41.408848   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:41.428221   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:41.511992   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:41.705101   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:41.909296   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:41.928081   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:41.987434   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:42.204490   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:42.408112   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:42.427173   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:42.486308   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:42.703637   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:42.908500   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:42.928045   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:42.987023   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:43.204836   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:43.409392   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:43.427926   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:43.529117   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:43.704883   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:43.908797   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:43.928099   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:43.986202   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:44.204550   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:44.408605   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:44.427985   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:44.487747   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:44.704371   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:44.967496   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:44.967508   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:44.986231   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:45.203762   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:45.408353   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:45.427230   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:45.527274   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:45.706042   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:45.907636   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:45.927658   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:45.987093   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:46.205151   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:46.409120   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:46.427227   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:46.486509   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:46.703976   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:46.908749   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:46.927183   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:46.986894   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:47.204716   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:47.408845   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:47.428548   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:47.528714   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:47.703984   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:47.908804   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:47.928321   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:47.986546   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:48.204202   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:48.407652   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:48.427587   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:48.507784   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:48.704912   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:48.909258   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:48.927386   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:48.986478   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:49.204306   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:49.408179   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:49.427086   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:49.508693   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:49.703889   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:49.908426   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:49.927684   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:49.987010   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:50.204660   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:50.408218   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:50.427076   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:50.487097   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:50.725146   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:50.909497   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:50.927949   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:50.987146   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:51.205046   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:51.407721   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:51.427443   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:51.486381   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:51.703724   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:51.908408   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:51.926940   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:52.008709   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:52.204119   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:52.408284   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:52.427383   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:52.487589   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:52.704708   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:52.908338   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:52.926968   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:52.986500   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:53.203447   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:53.407640   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:53.427468   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:53.486462   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:53.703973   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:53.909196   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:53.927461   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:53.986619   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:54.204635   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:54.410262   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:54.429174   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:54.487147   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:54.705215   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:54.908014   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:55.024754   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:55.024767   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:55.257505   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:55.408151   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:55.427615   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:55.487345   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:55.704893   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:55.908638   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:55.927490   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:55.986143   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:56.204842   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:56.409203   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:56.427665   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:56.486972   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:56.704655   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:56.908422   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:57.009059   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:57.009222   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:57.205352   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:57.408608   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:57.428265   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:57.487029   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:57.704732   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:57.908394   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:57.929414   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:57.987395   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:58.203793   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:58.409060   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:58.427201   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:58.486353   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:58.703534   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:58.907973   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:58.926903   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:58.986513   14179 kapi.go:107] duration metric: took 1m6.502613345s to wait for kubernetes.io/minikube-addons=registry ...
	I1119 21:48:59.204595   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:59.410221   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:59.428457   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:59.704801   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:59.908657   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:59.976877   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:00.204380   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:00.408230   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:00.427500   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:00.704339   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:00.908035   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:00.927195   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:01.204945   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:01.411762   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:01.428575   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:01.704195   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:01.908544   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:01.927221   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:02.207063   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:02.408897   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:02.426861   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:02.704220   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:02.909230   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:02.927592   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:03.204863   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:03.465928   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:03.466241   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:03.703927   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:03.908630   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:03.927472   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:04.204193   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:04.408426   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:04.427965   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:04.704424   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:04.907719   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:04.927148   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:05.204388   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:05.409248   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:05.428498   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:05.705294   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:05.909029   14179 kapi.go:107] duration metric: took 1m12.50379732s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1119 21:49:05.927519   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:06.203794   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:06.427235   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:06.754169   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:06.928217   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:07.204399   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:07.428499   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:07.704573   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:07.928305   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:08.204158   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:08.427578   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:08.704300   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:08.928307   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:09.204516   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:09.427463   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:09.704472   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:09.930488   14179 kapi.go:107] duration metric: took 1m17.005797471s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1119 21:49:10.203731   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:10.704302   14179 kapi.go:107] duration metric: took 1m11.002988s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1119 21:49:10.705337   14179 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-418049 cluster.
	I1119 21:49:10.706242   14179 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1119 21:49:10.707072   14179 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1119 21:49:10.708119   14179 out.go:179] * Enabled addons: ingress-dns, metrics-server, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, inspektor-gadget, storage-provisioner, registry-creds, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1119 21:49:10.709029   14179 addons.go:515] duration metric: took 1m19.274995497s for enable addons: enabled=[ingress-dns metrics-server amd-gpu-device-plugin cloud-spanner nvidia-device-plugin inspektor-gadget storage-provisioner registry-creds yakd default-storageclass storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1119 21:49:10.709065   14179 start.go:247] waiting for cluster config update ...
	I1119 21:49:10.709089   14179 start.go:256] writing updated cluster config ...
	I1119 21:49:10.709316   14179 ssh_runner.go:195] Run: rm -f paused
	I1119 21:49:10.712972   14179 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 21:49:10.715182   14179 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7v6rp" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:10.718499   14179 pod_ready.go:94] pod "coredns-66bc5c9577-7v6rp" is "Ready"
	I1119 21:49:10.718517   14179 pod_ready.go:86] duration metric: took 3.316776ms for pod "coredns-66bc5c9577-7v6rp" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:10.720007   14179 pod_ready.go:83] waiting for pod "etcd-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:10.723044   14179 pod_ready.go:94] pod "etcd-addons-418049" is "Ready"
	I1119 21:49:10.723063   14179 pod_ready.go:86] duration metric: took 3.034414ms for pod "etcd-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:10.724616   14179 pod_ready.go:83] waiting for pod "kube-apiserver-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:10.727579   14179 pod_ready.go:94] pod "kube-apiserver-addons-418049" is "Ready"
	I1119 21:49:10.727595   14179 pod_ready.go:86] duration metric: took 2.963666ms for pod "kube-apiserver-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:10.728994   14179 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:11.116753   14179 pod_ready.go:94] pod "kube-controller-manager-addons-418049" is "Ready"
	I1119 21:49:11.116778   14179 pod_ready.go:86] duration metric: took 387.766983ms for pod "kube-controller-manager-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:11.316532   14179 pod_ready.go:83] waiting for pod "kube-proxy-8rrhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:11.716236   14179 pod_ready.go:94] pod "kube-proxy-8rrhm" is "Ready"
	I1119 21:49:11.716260   14179 pod_ready.go:86] duration metric: took 399.707199ms for pod "kube-proxy-8rrhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:11.916989   14179 pod_ready.go:83] waiting for pod "kube-scheduler-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:12.315597   14179 pod_ready.go:94] pod "kube-scheduler-addons-418049" is "Ready"
	I1119 21:49:12.315619   14179 pod_ready.go:86] duration metric: took 398.608571ms for pod "kube-scheduler-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:12.315630   14179 pod_ready.go:40] duration metric: took 1.602635151s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 21:49:12.360149   14179 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 21:49:12.361555   14179 out.go:179] * Done! kubectl is now configured to use "addons-418049" cluster and "default" namespace by default
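The gcp-auth messages above describe an opt-out mechanism: pods that carry a label with the `gcp-auth-skip-secret` key are left alone by the webhook, while everything else gets GCP credentials mounted. As a minimal sketch (not taken from this run; the label value "true", the pod name, and the overall setup are illustrative assumptions), a pod that opts out could be described with the standard k8s.io API types and rendered to a manifest like this:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pod carrying the gcp-auth-skip-secret label; per the message above, the
	// gcp-auth webhook is expected to skip such pods (label value is illustrative).
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "gcr.io/k8s-minikube/busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // prints the equivalent pod manifest
}

Applying the rendered manifest (or adding the same label to an existing workload and recreating its pods, in line with the --refresh note above) should keep GCP credentials out of that pod.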
	
	
	==> CRI-O <==
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.757749495Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-xglpl/POD" id=ad28209d-4091-49a1-bad8-479a34380021 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.757845721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.764975408Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-xglpl Namespace:default ID:356734a6074ff844639ee80cd936cb4ea18e7dd2b065372cd0544cfe0e975fdb UID:75a0bb5c-a34f-401f-b3e2-70f80867b323 NetNS:/var/run/netns/2203dcde-5bb0-4a13-a82d-23afc0bbe875 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000520aa0}] Aliases:map[]}"
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.76501879Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-xglpl to CNI network \"kindnet\" (type=ptp)"
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.783254593Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-xglpl Namespace:default ID:356734a6074ff844639ee80cd936cb4ea18e7dd2b065372cd0544cfe0e975fdb UID:75a0bb5c-a34f-401f-b3e2-70f80867b323 NetNS:/var/run/netns/2203dcde-5bb0-4a13-a82d-23afc0bbe875 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000520aa0}] Aliases:map[]}"
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.783437667Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-xglpl for CNI network kindnet (type=ptp)"
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.784720509Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.786045856Z" level=info msg="Ran pod sandbox 356734a6074ff844639ee80cd936cb4ea18e7dd2b065372cd0544cfe0e975fdb with infra container: default/hello-world-app-5d498dc89-xglpl/POD" id=ad28209d-4091-49a1-bad8-479a34380021 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.787239136Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=7073d69c-4c1b-4f2b-ad81-10aadeea9824 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.787376551Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=7073d69c-4c1b-4f2b-ad81-10aadeea9824 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.787424315Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=7073d69c-4c1b-4f2b-ad81-10aadeea9824 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.788050356Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=1112a1be-1a28-4d9c-bf68-1740651fe1f5 name=/runtime.v1.ImageService/PullImage
	Nov 19 21:52:02 addons-418049 crio[774]: time="2025-11-19T21:52:02.804787659Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 19 21:52:03 addons-418049 crio[774]: time="2025-11-19T21:52:03.588955112Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=1112a1be-1a28-4d9c-bf68-1740651fe1f5 name=/runtime.v1.ImageService/PullImage
	Nov 19 21:52:03 addons-418049 crio[774]: time="2025-11-19T21:52:03.589433989Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=61e11baf-f4f8-4b91-99b6-679c807fe8b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:52:03 addons-418049 crio[774]: time="2025-11-19T21:52:03.590837047Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a3a01f96-601b-48c3-88b9-812e855493f9 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:52:03 addons-418049 crio[774]: time="2025-11-19T21:52:03.594310887Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-xglpl/hello-world-app" id=37734e6a-1f5b-4580-a992-f7f3ae8ab4cb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:52:03 addons-418049 crio[774]: time="2025-11-19T21:52:03.594428691Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:52:03 addons-418049 crio[774]: time="2025-11-19T21:52:03.599476077Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:52:03 addons-418049 crio[774]: time="2025-11-19T21:52:03.599658206Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/d6fbf40e1f82e8a21aaf3aaa8dca27f61503777fd56431b591eba081fc9c3ca3/merged/etc/passwd: no such file or directory"
	Nov 19 21:52:03 addons-418049 crio[774]: time="2025-11-19T21:52:03.599689524Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/d6fbf40e1f82e8a21aaf3aaa8dca27f61503777fd56431b591eba081fc9c3ca3/merged/etc/group: no such file or directory"
	Nov 19 21:52:03 addons-418049 crio[774]: time="2025-11-19T21:52:03.599972377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:52:03 addons-418049 crio[774]: time="2025-11-19T21:52:03.629145102Z" level=info msg="Created container f382865295187a560276133721ff1a93ff42811ca6ef360d1a4ea85bb9e89ce7: default/hello-world-app-5d498dc89-xglpl/hello-world-app" id=37734e6a-1f5b-4580-a992-f7f3ae8ab4cb name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:52:03 addons-418049 crio[774]: time="2025-11-19T21:52:03.629657876Z" level=info msg="Starting container: f382865295187a560276133721ff1a93ff42811ca6ef360d1a4ea85bb9e89ce7" id=2411bc0f-5c10-4e50-a1a6-0bccfcb18e32 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 21:52:03 addons-418049 crio[774]: time="2025-11-19T21:52:03.631498385Z" level=info msg="Started container" PID=9978 containerID=f382865295187a560276133721ff1a93ff42811ca6ef360d1a4ea85bb9e89ce7 description=default/hello-world-app-5d498dc89-xglpl/hello-world-app id=2411bc0f-5c10-4e50-a1a6-0bccfcb18e32 name=/runtime.v1.RuntimeService/StartContainer sandboxID=356734a6074ff844639ee80cd936cb4ea18e7dd2b065372cd0544cfe0e975fdb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	f382865295187       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   356734a6074ff       hello-world-app-5d498dc89-xglpl            default
	a426bdfc52a0c       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   b7fc97f45493f       registry-creds-764b6fb674-j5lrp            kube-system
	fede47a462927       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   b494523e0e740       nginx                                      default
	e413d32a250e7       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   7d68704ff1904       busybox                                    default
	1c15072ba6f0d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   abb24fffdfd8f       gcp-auth-78565c9fb4-9cbs7                  gcp-auth
	846acb6dc9f45       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago            Running             controller                               0                   c14b49a91426d       ingress-nginx-controller-6c8bf45fb-jbmr5   ingress-nginx
	615461f73700f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   ce1edd6ff7eef       csi-hostpathplugin-2mv8p                   kube-system
	640ee0941acbe       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago            Running             csi-provisioner                          0                   ce1edd6ff7eef       csi-hostpathplugin-2mv8p                   kube-system
	aa6d22c4422b5       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago            Running             liveness-probe                           0                   ce1edd6ff7eef       csi-hostpathplugin-2mv8p                   kube-system
	c34f9e102966e       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago            Running             hostpath                                 0                   ce1edd6ff7eef       csi-hostpathplugin-2mv8p                   kube-system
	b14bbbfda2b31       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            3 minutes ago            Running             gadget                                   0                   1f2f24c99fd54       gadget-9ww4s                               gadget
	4fe0dd3f7607b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago            Running             node-driver-registrar                    0                   ce1edd6ff7eef       csi-hostpathplugin-2mv8p                   kube-system
	49f15b33a19b1       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   dad66d5865894       registry-proxy-znvmk                       kube-system
	ee296b974e145       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   a6f9009efca6b       amd-gpu-device-plugin-2tvsr                kube-system
	953cd4622bdea       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   990ae8691ce33       nvidia-device-plugin-daemonset-86rtv       kube-system
	cabdf495a7872       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   bc416f7a1b2ac       csi-hostpath-resizer-0                     kube-system
	8c2b153c22cf2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              patch                                    0                   e22af24488d53       ingress-nginx-admission-patch-qddgt        ingress-nginx
	254638acfaa6f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   73f35a767436d       snapshot-controller-7d9fbc56b8-rcvt9       kube-system
	57eb757d2767f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              create                                   0                   c151f3665cad4       ingress-nginx-admission-create-5rv6p       ingress-nginx
	a788e98cc9d95       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   2a6b288f0591e       local-path-provisioner-648f6765c9-rqhgx    local-path-storage
	702bace1b1665       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   9b010c1a2a4de       csi-hostpath-attacher-0                    kube-system
	36eea9c566cd0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   ce1edd6ff7eef       csi-hostpathplugin-2mv8p                   kube-system
	acd4c407fc320       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   c67d84aeb9cc8       snapshot-controller-7d9fbc56b8-knb29       kube-system
	abf2da285e255       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   6ab5102654906       registry-6b586f9694-7pv4f                  kube-system
	963e25c80a5c1       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   941edd9914950       yakd-dashboard-5ff678cb9-9g826             yakd-dashboard
	c8c39f318fa44       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               3 minutes ago            Running             cloud-spanner-emulator                   0                   7e18779ef0c6b       cloud-spanner-emulator-6f9fcf858b-tqntd    default
	a0a23eb827f27       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   ee9590b327fa6       metrics-server-85b7d694d7-ggkmz            kube-system
	acffddf4a9a12       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   28935190f110a       kube-ingress-dns-minikube                  kube-system
	f6a9f035506dd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   b3d5be5a1107f       coredns-66bc5c9577-7v6rp                   kube-system
	477c8991360a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   66a3b652fe301       storage-provisioner                        kube-system
	292ee6aa235ef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   2e743af2fc9e6       kube-proxy-8rrhm                           kube-system
	a0d1b51e3bef7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   914f5b455ceef       kindnet-52bj8                              kube-system
	ffe9f59b44ecc       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   e2873697df875       kube-controller-manager-addons-418049      kube-system
	3365bf7838fc3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   90942f1365463       etcd-addons-418049                         kube-system
	63b8d07a3ca42       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   71cec2a407c85       kube-apiserver-addons-418049               kube-system
	639836d21fa22       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   e29b3fc61b86d       kube-scheduler-addons-418049               kube-system
	
	
	==> coredns [f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d] <==
	[INFO] 10.244.0.22:55227 - 8589 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007653414s
	[INFO] 10.244.0.22:45336 - 945 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00602642s
	[INFO] 10.244.0.22:37719 - 28938 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006626025s
	[INFO] 10.244.0.22:54573 - 41399 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004478191s
	[INFO] 10.244.0.22:44115 - 28835 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005127919s
	[INFO] 10.244.0.22:43646 - 45961 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000756816s
	[INFO] 10.244.0.22:55612 - 2380 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001217887s
	[INFO] 10.244.0.27:33146 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000213781s
	[INFO] 10.244.0.27:60992 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164973s
	[INFO] 10.244.0.31:36986 - 25588 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.00021142s
	[INFO] 10.244.0.31:46997 - 22357 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000277701s
	[INFO] 10.244.0.31:47876 - 48081 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000116959s
	[INFO] 10.244.0.31:38126 - 14249 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000155504s
	[INFO] 10.244.0.31:38263 - 18209 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000079226s
	[INFO] 10.244.0.31:56154 - 32128 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000116091s
	[INFO] 10.244.0.31:58751 - 46185 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.00432776s
	[INFO] 10.244.0.31:56324 - 46358 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004953129s
	[INFO] 10.244.0.31:37222 - 28992 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004039397s
	[INFO] 10.244.0.31:35031 - 3324 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.00443073s
	[INFO] 10.244.0.31:57695 - 54368 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004933639s
	[INFO] 10.244.0.31:45863 - 1249 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005681869s
	[INFO] 10.244.0.31:46144 - 37197 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004565829s
	[INFO] 10.244.0.31:56841 - 38122 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00464433s
	[INFO] 10.244.0.31:57657 - 45372 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001396934s
	[INFO] 10.244.0.31:48757 - 12677 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001686673s
	
	
	==> describe nodes <==
	Name:               addons-418049
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-418049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=addons-418049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T21_47_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-418049
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-418049"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 21:47:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-418049
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 21:52:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 21:51:20 +0000   Wed, 19 Nov 2025 21:47:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 21:51:20 +0000   Wed, 19 Nov 2025 21:47:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 21:51:20 +0000   Wed, 19 Nov 2025 21:47:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 21:51:20 +0000   Wed, 19 Nov 2025 21:48:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-418049
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                c4731ef1-8c53-401e-85ea-2fbdcc5178dc
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  default                     cloud-spanner-emulator-6f9fcf858b-tqntd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  default                     hello-world-app-5d498dc89-xglpl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-9ww4s                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  gcp-auth                    gcp-auth-78565c9fb4-9cbs7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-jbmr5    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m11s
	  kube-system                 amd-gpu-device-plugin-2tvsr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 coredns-66bc5c9577-7v6rp                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m12s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 csi-hostpathplugin-2mv8p                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-addons-418049                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m18s
	  kube-system                 kindnet-52bj8                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m12s
	  kube-system                 kube-apiserver-addons-418049                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-addons-418049       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-8rrhm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-scheduler-addons-418049                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 metrics-server-85b7d694d7-ggkmz             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m11s
	  kube-system                 nvidia-device-plugin-daemonset-86rtv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 registry-6b586f9694-7pv4f                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 registry-creds-764b6fb674-j5lrp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 registry-proxy-znvmk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 snapshot-controller-7d9fbc56b8-knb29        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 snapshot-controller-7d9fbc56b8-rcvt9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  local-path-storage          local-path-provisioner-648f6765c9-rqhgx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9g826              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m11s  kube-proxy       
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m18s  kubelet          Node addons-418049 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s  kubelet          Node addons-418049 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s  kubelet          Node addons-418049 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m13s  node-controller  Node addons-418049 event: Registered Node addons-418049 in Controller
	  Normal  NodeReady                3m31s  kubelet          Node addons-418049 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9] <==
	{"level":"warn","ts":"2025-11-19T21:47:42.749935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.755853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.768896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.774132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.779683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.784968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.790371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.795846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.802024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.807542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.819145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.824860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.838968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.841999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.847471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.854900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.900697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:53.777394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:53.784144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:48:20.276127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:48:20.282129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:48:20.296645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:48:20.303755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42972","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T21:48:44.965953Z","caller":"traceutil/trace.go:172","msg":"trace[1611484485] transaction","detail":"{read_only:false; response_revision:982; number_of_response:1; }","duration":"119.176969ms","start":"2025-11-19T21:48:44.846761Z","end":"2025-11-19T21:48:44.965938Z","steps":["trace[1611484485] 'process raft request'  (duration: 54.432746ms)","trace[1611484485] 'compare'  (duration: 64.65383ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T21:49:03.463280Z","caller":"traceutil/trace.go:172","msg":"trace[1687762284] transaction","detail":"{read_only:false; response_revision:1151; number_of_response:1; }","duration":"100.063035ms","start":"2025-11-19T21:49:03.363202Z","end":"2025-11-19T21:49:03.463265Z","steps":["trace[1687762284] 'process raft request'  (duration: 99.904507ms)"],"step_count":1}
	
	
	==> gcp-auth [1c15072ba6f0d836406454de7a4723016c9b47a78d679dfb3effec98451a82c6] <==
	2025/11/19 21:49:09 GCP Auth Webhook started!
	2025/11/19 21:49:12 Ready to marshal response ...
	2025/11/19 21:49:12 Ready to write response ...
	2025/11/19 21:49:12 Ready to marshal response ...
	2025/11/19 21:49:12 Ready to write response ...
	2025/11/19 21:49:12 Ready to marshal response ...
	2025/11/19 21:49:12 Ready to write response ...
	2025/11/19 21:49:20 Ready to marshal response ...
	2025/11/19 21:49:20 Ready to write response ...
	2025/11/19 21:49:20 Ready to marshal response ...
	2025/11/19 21:49:20 Ready to write response ...
	2025/11/19 21:49:30 Ready to marshal response ...
	2025/11/19 21:49:30 Ready to write response ...
	2025/11/19 21:49:30 Ready to marshal response ...
	2025/11/19 21:49:30 Ready to write response ...
	2025/11/19 21:49:34 Ready to marshal response ...
	2025/11/19 21:49:34 Ready to write response ...
	2025/11/19 21:49:39 Ready to marshal response ...
	2025/11/19 21:49:39 Ready to write response ...
	2025/11/19 21:49:56 Ready to marshal response ...
	2025/11/19 21:49:56 Ready to write response ...
	2025/11/19 21:52:02 Ready to marshal response ...
	2025/11/19 21:52:02 Ready to write response ...
	
	
	==> kernel <==
	 21:52:04 up 34 min,  0 user,  load average: 1.15, 0.83, 0.40
	Linux addons-418049 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833] <==
	I1119 21:50:02.301313       1 main.go:301] handling current node
	I1119 21:50:12.306722       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:50:12.306757       1 main.go:301] handling current node
	I1119 21:50:22.305564       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:50:22.305600       1 main.go:301] handling current node
	I1119 21:50:32.302175       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:50:32.302220       1 main.go:301] handling current node
	I1119 21:50:42.308045       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:50:42.308071       1 main.go:301] handling current node
	I1119 21:50:52.299436       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:50:52.299468       1 main.go:301] handling current node
	I1119 21:51:02.306193       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:51:02.306224       1 main.go:301] handling current node
	I1119 21:51:12.305428       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:51:12.305456       1 main.go:301] handling current node
	I1119 21:51:22.299375       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:51:22.299419       1 main.go:301] handling current node
	I1119 21:51:32.299444       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:51:32.299473       1 main.go:301] handling current node
	I1119 21:51:42.300846       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:51:42.300875       1 main.go:301] handling current node
	I1119 21:51:52.298853       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:51:52.298888       1 main.go:301] handling current node
	I1119 21:52:02.305446       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:52:02.305480       1 main.go:301] handling current node
	
	
	==> kube-apiserver [63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1119 21:48:42.544648       1 handler_proxy.go:99] no RequestInfo found in the context
	W1119 21:48:42.544670       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 21:48:42.544689       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1119 21:48:42.544703       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1119 21:48:42.544720       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1119 21:48:42.545830       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1119 21:48:46.551111       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 21:48:46.551342       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.38.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.38.199:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1119 21:48:46.551371       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1119 21:48:46.553669       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1119 21:49:19.997233       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42356: use of closed network connection
	E1119 21:49:20.136937       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42382: use of closed network connection
	I1119 21:49:39.667301       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1119 21:49:39.852383       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.25.204"}
	I1119 21:49:45.710346       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1119 21:52:02.528480       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.100.43.193"}
	
	
	==> kube-controller-manager [ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12] <==
	I1119 21:47:50.260334       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 21:47:50.260443       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 21:47:50.260569       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 21:47:50.260601       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 21:47:50.260624       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 21:47:50.260764       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 21:47:50.260993       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 21:47:50.261716       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 21:47:50.261783       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 21:47:50.262425       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 21:47:50.263753       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 21:47:50.266492       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 21:47:50.268688       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 21:47:50.269787       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 21:47:50.275043       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 21:47:50.282397       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1119 21:47:52.593732       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1119 21:48:20.270998       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 21:48:20.271120       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1119 21:48:20.271159       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1119 21:48:20.289245       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1119 21:48:20.292242       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 21:48:20.371769       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 21:48:20.392958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 21:48:35.217205       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59] <==
	I1119 21:47:51.804321       1 server_linux.go:53] "Using iptables proxy"
	I1119 21:47:52.063197       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 21:47:52.165657       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 21:47:52.168873       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 21:47:52.174880       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 21:47:52.500054       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 21:47:52.500294       1 server_linux.go:132] "Using iptables Proxier"
	I1119 21:47:52.521826       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 21:47:52.530439       1 server.go:527] "Version info" version="v1.34.1"
	I1119 21:47:52.530532       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 21:47:52.535727       1 config.go:200] "Starting service config controller"
	I1119 21:47:52.536051       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 21:47:52.536757       1 config.go:106] "Starting endpoint slice config controller"
	I1119 21:47:52.536780       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 21:47:52.536836       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 21:47:52.536843       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 21:47:52.537792       1 config.go:309] "Starting node config controller"
	I1119 21:47:52.537811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 21:47:52.539005       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 21:47:52.637732       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 21:47:52.639648       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 21:47:52.639675       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98] <==
	E1119 21:47:43.287291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 21:47:43.287474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 21:47:43.289446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 21:47:43.289450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 21:47:43.289554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 21:47:43.289606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 21:47:43.289609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 21:47:43.289663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 21:47:43.289685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 21:47:43.290525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 21:47:43.290525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 21:47:43.290586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 21:47:43.290601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 21:47:43.290688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 21:47:43.290707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 21:47:43.290765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 21:47:43.290769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 21:47:44.115292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 21:47:44.170384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 21:47:44.275666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 21:47:44.336443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 21:47:44.347289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 21:47:44.358201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 21:47:44.495974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1119 21:47:46.686510       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 21:50:03 addons-418049 kubelet[1280]: I1119 21:50:03.449280    1280 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^af745c70-c591-11f0-a486-46ab2516ba23\") pod \"c37a8d01-75b5-4987-bd4a-986e609a9128\" (UID: \"c37a8d01-75b5-4987-bd4a-986e609a9128\") "
	Nov 19 21:50:03 addons-418049 kubelet[1280]: I1119 21:50:03.449267    1280 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c37a8d01-75b5-4987-bd4a-986e609a9128-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "c37a8d01-75b5-4987-bd4a-986e609a9128" (UID: "c37a8d01-75b5-4987-bd4a-986e609a9128"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 19 21:50:03 addons-418049 kubelet[1280]: I1119 21:50:03.449398    1280 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c37a8d01-75b5-4987-bd4a-986e609a9128-gcp-creds\") on node \"addons-418049\" DevicePath \"\""
	Nov 19 21:50:03 addons-418049 kubelet[1280]: I1119 21:50:03.451226    1280 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c37a8d01-75b5-4987-bd4a-986e609a9128-kube-api-access-rc4sm" (OuterVolumeSpecName: "kube-api-access-rc4sm") pod "c37a8d01-75b5-4987-bd4a-986e609a9128" (UID: "c37a8d01-75b5-4987-bd4a-986e609a9128"). InnerVolumeSpecName "kube-api-access-rc4sm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 19 21:50:03 addons-418049 kubelet[1280]: I1119 21:50:03.452155    1280 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^af745c70-c591-11f0-a486-46ab2516ba23" (OuterVolumeSpecName: "task-pv-storage") pod "c37a8d01-75b5-4987-bd4a-986e609a9128" (UID: "c37a8d01-75b5-4987-bd4a-986e609a9128"). InnerVolumeSpecName "pvc-cbec882a-9edd-4c04-8689-7d40ca6dbed2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 19 21:50:03 addons-418049 kubelet[1280]: I1119 21:50:03.550640    1280 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rc4sm\" (UniqueName: \"kubernetes.io/projected/c37a8d01-75b5-4987-bd4a-986e609a9128-kube-api-access-rc4sm\") on node \"addons-418049\" DevicePath \"\""
	Nov 19 21:50:03 addons-418049 kubelet[1280]: I1119 21:50:03.550688    1280 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-cbec882a-9edd-4c04-8689-7d40ca6dbed2\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^af745c70-c591-11f0-a486-46ab2516ba23\") on node \"addons-418049\" "
	Nov 19 21:50:03 addons-418049 kubelet[1280]: I1119 21:50:03.555141    1280 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-cbec882a-9edd-4c04-8689-7d40ca6dbed2" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^af745c70-c591-11f0-a486-46ab2516ba23") on node "addons-418049"
	Nov 19 21:50:03 addons-418049 kubelet[1280]: I1119 21:50:03.651626    1280 reconciler_common.go:299] "Volume detached for volume \"pvc-cbec882a-9edd-4c04-8689-7d40ca6dbed2\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^af745c70-c591-11f0-a486-46ab2516ba23\") on node \"addons-418049\" DevicePath \"\""
	Nov 19 21:50:03 addons-418049 kubelet[1280]: I1119 21:50:03.812448    1280 scope.go:117] "RemoveContainer" containerID="2a5f7c9d1a40a027d8f28f0dc6c5a396eb032024395dd68bfb7fbc639a6b6326"
	Nov 19 21:50:03 addons-418049 kubelet[1280]: I1119 21:50:03.822133    1280 scope.go:117] "RemoveContainer" containerID="2a5f7c9d1a40a027d8f28f0dc6c5a396eb032024395dd68bfb7fbc639a6b6326"
	Nov 19 21:50:03 addons-418049 kubelet[1280]: E1119 21:50:03.822487    1280 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a5f7c9d1a40a027d8f28f0dc6c5a396eb032024395dd68bfb7fbc639a6b6326\": container with ID starting with 2a5f7c9d1a40a027d8f28f0dc6c5a396eb032024395dd68bfb7fbc639a6b6326 not found: ID does not exist" containerID="2a5f7c9d1a40a027d8f28f0dc6c5a396eb032024395dd68bfb7fbc639a6b6326"
	Nov 19 21:50:03 addons-418049 kubelet[1280]: I1119 21:50:03.822524    1280 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a5f7c9d1a40a027d8f28f0dc6c5a396eb032024395dd68bfb7fbc639a6b6326"} err="failed to get container status \"2a5f7c9d1a40a027d8f28f0dc6c5a396eb032024395dd68bfb7fbc639a6b6326\": rpc error: code = NotFound desc = could not find container \"2a5f7c9d1a40a027d8f28f0dc6c5a396eb032024395dd68bfb7fbc639a6b6326\": container with ID starting with 2a5f7c9d1a40a027d8f28f0dc6c5a396eb032024395dd68bfb7fbc639a6b6326 not found: ID does not exist"
	Nov 19 21:50:05 addons-418049 kubelet[1280]: I1119 21:50:05.302697    1280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c37a8d01-75b5-4987-bd4a-986e609a9128" path="/var/lib/kubelet/pods/c37a8d01-75b5-4987-bd4a-986e609a9128/volumes"
	Nov 19 21:50:10 addons-418049 kubelet[1280]: I1119 21:50:10.301502    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-86rtv" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:50:14 addons-418049 kubelet[1280]: I1119 21:50:14.300766    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-2tvsr" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:50:18 addons-418049 kubelet[1280]: I1119 21:50:18.300618    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-znvmk" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:50:35 addons-418049 kubelet[1280]: E1119 21:50:35.812473    1280 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-j5lrp" podUID="eefdd28e-9cfa-4e4a-8c18-ecececdc9c06"
	Nov 19 21:51:14 addons-418049 kubelet[1280]: I1119 21:51:14.301185    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-86rtv" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:51:21 addons-418049 kubelet[1280]: I1119 21:51:21.301513    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-2tvsr" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:51:30 addons-418049 kubelet[1280]: I1119 21:51:30.300900    1280 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-znvmk" secret="" err="secret \"gcp-auth\" not found"
	Nov 19 21:52:02 addons-418049 kubelet[1280]: I1119 21:52:02.450842    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-j5lrp" podStartSLOduration=249.376727049 podStartE2EDuration="4m10.450805797s" podCreationTimestamp="2025-11-19 21:47:52 +0000 UTC" firstStartedPulling="2025-11-19 21:50:47.329662832 +0000 UTC m=+182.107640192" lastFinishedPulling="2025-11-19 21:50:48.403741567 +0000 UTC m=+183.181718940" observedRunningTime="2025-11-19 21:50:48.976811403 +0000 UTC m=+183.754788783" watchObservedRunningTime="2025-11-19 21:52:02.450805797 +0000 UTC m=+257.228783177"
	Nov 19 21:52:02 addons-418049 kubelet[1280]: I1119 21:52:02.638024    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/75a0bb5c-a34f-401f-b3e2-70f80867b323-gcp-creds\") pod \"hello-world-app-5d498dc89-xglpl\" (UID: \"75a0bb5c-a34f-401f-b3e2-70f80867b323\") " pod="default/hello-world-app-5d498dc89-xglpl"
	Nov 19 21:52:02 addons-418049 kubelet[1280]: I1119 21:52:02.638193    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t96jr\" (UniqueName: \"kubernetes.io/projected/75a0bb5c-a34f-401f-b3e2-70f80867b323-kube-api-access-t96jr\") pod \"hello-world-app-5d498dc89-xglpl\" (UID: \"75a0bb5c-a34f-401f-b3e2-70f80867b323\") " pod="default/hello-world-app-5d498dc89-xglpl"
	Nov 19 21:52:04 addons-418049 kubelet[1280]: I1119 21:52:04.236545    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-xglpl" podStartSLOduration=1.4340664410000001 podStartE2EDuration="2.236526545s" podCreationTimestamp="2025-11-19 21:52:02 +0000 UTC" firstStartedPulling="2025-11-19 21:52:02.787735204 +0000 UTC m=+257.565712574" lastFinishedPulling="2025-11-19 21:52:03.590195311 +0000 UTC m=+258.368172678" observedRunningTime="2025-11-19 21:52:04.235692329 +0000 UTC m=+259.013669708" watchObservedRunningTime="2025-11-19 21:52:04.236526545 +0000 UTC m=+259.014503925"
	
	
	==> storage-provisioner [477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6] <==
	W1119 21:51:39.984553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:41.986620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:41.990682       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:43.993305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:43.996960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:45.999191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:46.003739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:48.006177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:48.010096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:50.012026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:50.014859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:52.017521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:52.021584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:54.024237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:54.027703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:56.030650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:56.034181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:58.036148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:51:58.040327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:00.042854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:00.046041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:02.048294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:02.051622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:04.054448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:52:04.058664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-418049 -n addons-418049
helpers_test.go:269: (dbg) Run:  kubectl --context addons-418049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-5rv6p ingress-nginx-admission-patch-qddgt
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-418049 describe pod ingress-nginx-admission-create-5rv6p ingress-nginx-admission-patch-qddgt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-418049 describe pod ingress-nginx-admission-create-5rv6p ingress-nginx-admission-patch-qddgt: exit status 1 (55.178631ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5rv6p" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qddgt" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-418049 describe pod ingress-nginx-admission-create-5rv6p ingress-nginx-admission-patch-qddgt: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (225.956582ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:52:04.854456   28905 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:52:04.854627   28905 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:04.854637   28905 out.go:374] Setting ErrFile to fd 2...
	I1119 21:52:04.854641   28905 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:04.854846   28905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:52:04.855077   28905 mustload.go:66] Loading cluster: addons-418049
	I1119 21:52:04.855402   28905 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:04.855416   28905 addons.go:607] checking whether the cluster is paused
	I1119 21:52:04.855496   28905 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:04.855508   28905 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:52:04.855896   28905 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:52:04.873110   28905 ssh_runner.go:195] Run: systemctl --version
	I1119 21:52:04.873166   28905 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:52:04.889340   28905 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:52:04.978774   28905 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:52:04.978899   28905 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:52:05.006691   28905 cri.go:89] found id: "a426bdfc52a0c41d4a9b2c7d7149521742fe298d1e22caab6fa5cdc4000cae93"
	I1119 21:52:05.006709   28905 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:52:05.006714   28905 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:52:05.006719   28905 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:52:05.006722   28905 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:52:05.006728   28905 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:52:05.006732   28905 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:52:05.006736   28905 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:52:05.006740   28905 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:52:05.006758   28905 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:52:05.006766   28905 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:52:05.006770   28905 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:52:05.006775   28905 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:52:05.006779   28905 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:52:05.006784   28905 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:52:05.006800   28905 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:52:05.006809   28905 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:52:05.006834   28905 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:52:05.006838   28905 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:52:05.006842   28905 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:52:05.006850   28905 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:52:05.006854   28905 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:52:05.006858   28905 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:52:05.006863   28905 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:52:05.006870   28905 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:52:05.006875   28905 cri.go:89] found id: ""
	I1119 21:52:05.006918   28905 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:52:05.019836   28905 out.go:203] 
	W1119 21:52:05.020962   28905 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:52:05.020980   28905 out.go:285] * 
	* 
	W1119 21:52:05.023998   28905 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:52:05.024987   28905 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable ingress --alsologtostderr -v=1: exit status 11 (226.777007ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:52:05.080098   28969 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:52:05.080380   28969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:05.080389   28969 out.go:374] Setting ErrFile to fd 2...
	I1119 21:52:05.080394   28969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:52:05.080584   28969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:52:05.080836   28969 mustload.go:66] Loading cluster: addons-418049
	I1119 21:52:05.081147   28969 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:05.081160   28969 addons.go:607] checking whether the cluster is paused
	I1119 21:52:05.081241   28969 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:52:05.081273   28969 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:52:05.081612   28969 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:52:05.098919   28969 ssh_runner.go:195] Run: systemctl --version
	I1119 21:52:05.098971   28969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:52:05.115321   28969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:52:05.204476   28969 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:52:05.204554   28969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:52:05.232398   28969 cri.go:89] found id: "a426bdfc52a0c41d4a9b2c7d7149521742fe298d1e22caab6fa5cdc4000cae93"
	I1119 21:52:05.232423   28969 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:52:05.232427   28969 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:52:05.232431   28969 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:52:05.232435   28969 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:52:05.232439   28969 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:52:05.232441   28969 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:52:05.232444   28969 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:52:05.232446   28969 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:52:05.232456   28969 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:52:05.232458   28969 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:52:05.232462   28969 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:52:05.232464   28969 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:52:05.232467   28969 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:52:05.232470   28969 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:52:05.232476   28969 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:52:05.232483   28969 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:52:05.232487   28969 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:52:05.232489   28969 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:52:05.232491   28969 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:52:05.232493   28969 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:52:05.232496   28969 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:52:05.232498   28969 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:52:05.232502   28969 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:52:05.232505   28969 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:52:05.232508   28969 cri.go:89] found id: ""
	I1119 21:52:05.232553   28969 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:52:05.246504   28969 out.go:203] 
	W1119 21:52:05.247717   28969 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:52:05.247743   28969 out.go:285] * 
	* 
	W1119 21:52:05.251095   28969 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:52:05.252217   28969 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.84s)
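Note on the recurring failure above: every `addons disable ...` invocation in this report exits with MK_ADDON_DISABLE_PAUSED for the same reason. The captured stderr shows minikube first "checking whether the cluster is paused" by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node; on this crio-based node /run/runc does not exist, so runc exits with status 1 and the disable command aborts before touching the addon. The snippet below is a minimal standalone Go sketch of that probe, for illustration only: it is not minikube's code and it shells out locally instead of over SSH as minikube does.

	// pausedprobe.go: run the same command minikube's paused check runs and
	// show the error it produces when /run/runc is missing (crio runtime).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation as in the stderr above: sudo runc list -f json
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On this node the combined output contains:
			//   level=error msg="open /run/runc: no such file or directory"
			fmt.Printf("paused check failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc containers:\n%s", out)
	}

Running this on a node where the container runtime is crio (so runc keeps no state under /run/runc) reproduces the non-zero exit that minikube turns into MK_ADDON_DISABLE_PAUSED, which is why the same exit status 11 appears for ingress-dns, ingress, inspektor-gadget, metrics-server, and the other addon-disable calls in this report.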

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-9ww4s" [519748a3-adeb-40d6-b608-aa21e679a39a] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003375147s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (278.381673ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:49:37.924555   23939 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:49:37.924716   23939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:37.924729   23939 out.go:374] Setting ErrFile to fd 2...
	I1119 21:49:37.924735   23939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:37.925035   23939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:49:37.925328   23939 mustload.go:66] Loading cluster: addons-418049
	I1119 21:49:37.925849   23939 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:37.925866   23939 addons.go:607] checking whether the cluster is paused
	I1119 21:49:37.926013   23939 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:37.926029   23939 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:49:37.926533   23939 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:49:37.948871   23939 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:37.948932   23939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:49:37.971043   23939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:49:38.070835   23939 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:38.070911   23939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:38.105753   23939 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:49:38.105788   23939 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:49:38.105794   23939 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:49:38.105798   23939 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:49:38.105803   23939 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:49:38.105808   23939 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:49:38.105833   23939 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:49:38.105837   23939 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:49:38.105841   23939 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:49:38.105855   23939 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:49:38.105859   23939 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:49:38.105864   23939 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:49:38.105868   23939 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:49:38.105872   23939 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:49:38.105876   23939 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:49:38.105892   23939 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:49:38.105902   23939 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:49:38.105908   23939 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:49:38.105912   23939 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:49:38.105915   23939 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:49:38.105923   23939 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:49:38.105927   23939 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:49:38.105931   23939 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:49:38.105934   23939 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:49:38.105938   23939 cri.go:89] found id: ""
	I1119 21:49:38.105998   23939 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:49:38.124699   23939 out.go:203] 
	W1119 21:49:38.126084   23939 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:49:38.126114   23939 out.go:285] * 
	* 
	W1119 21:49:38.130783   23939 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:49:38.132079   23939 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.225145ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-ggkmz" [d454f251-2d9d-4a61-a3a8-4aa052b74bf1] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003175973s
addons_test.go:463: (dbg) Run:  kubectl --context addons-418049 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (241.325019ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:49:39.233450   24479 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:49:39.233606   24479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:39.233616   24479 out.go:374] Setting ErrFile to fd 2...
	I1119 21:49:39.233620   24479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:39.233808   24479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:49:39.234069   24479 mustload.go:66] Loading cluster: addons-418049
	I1119 21:49:39.234381   24479 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:39.234393   24479 addons.go:607] checking whether the cluster is paused
	I1119 21:49:39.234470   24479 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:39.234480   24479 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:49:39.234854   24479 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:49:39.253923   24479 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:39.253977   24479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:49:39.272638   24479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:49:39.365358   24479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:39.365429   24479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:39.394893   24479 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:49:39.394919   24479 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:49:39.394925   24479 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:49:39.394931   24479 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:49:39.394935   24479 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:49:39.394940   24479 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:49:39.394944   24479 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:49:39.394949   24479 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:49:39.394953   24479 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:49:39.394965   24479 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:49:39.394972   24479 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:49:39.394976   24479 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:49:39.394978   24479 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:49:39.394980   24479 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:49:39.394983   24479 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:49:39.394995   24479 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:49:39.395004   24479 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:49:39.395010   24479 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:49:39.395014   24479 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:49:39.395017   24479 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:49:39.395025   24479 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:49:39.395033   24479 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:49:39.395037   24479 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:49:39.395041   24479 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:49:39.395045   24479 cri.go:89] found id: ""
	I1119 21:49:39.395101   24479 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:49:39.407777   24479 out.go:203] 
	W1119 21:49:39.412936   24479 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:49:39.412957   24479 out.go:285] * 
	* 
	W1119 21:49:39.415948   24479 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:49:39.417008   24479 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)

TestAddons/parallel/CSI (44.23s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.635205ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-418049 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-418049 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [8f850572-f75c-4891-8dd2-7e45898053b9] Pending
helpers_test.go:352: "task-pv-pod" [8f850572-f75c-4891-8dd2-7e45898053b9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [8f850572-f75c-4891-8dd2-7e45898053b9] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.002887082s
addons_test.go:572: (dbg) Run:  kubectl --context addons-418049 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-418049 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-418049 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-418049 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-418049 delete pod task-pv-pod: (1.000334296s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-418049 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-418049 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-418049 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [c37a8d01-75b5-4987-bd4a-986e609a9128] Pending
helpers_test.go:352: "task-pv-pod-restore" [c37a8d01-75b5-4987-bd4a-986e609a9128] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [c37a8d01-75b5-4987-bd4a-986e609a9128] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003425898s
addons_test.go:614: (dbg) Run:  kubectl --context addons-418049 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-418049 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-418049 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (223.497087ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:50:04.193309   26532 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:50:04.193570   26532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:50:04.193580   26532 out.go:374] Setting ErrFile to fd 2...
	I1119 21:50:04.193583   26532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:50:04.193763   26532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:50:04.194000   26532 mustload.go:66] Loading cluster: addons-418049
	I1119 21:50:04.194320   26532 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:50:04.194334   26532 addons.go:607] checking whether the cluster is paused
	I1119 21:50:04.194412   26532 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:50:04.194423   26532 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:50:04.194740   26532 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:50:04.211673   26532 ssh_runner.go:195] Run: systemctl --version
	I1119 21:50:04.211725   26532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:50:04.227912   26532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:50:04.317748   26532 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:50:04.317828   26532 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:50:04.344313   26532 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:50:04.344330   26532 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:50:04.344336   26532 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:50:04.344340   26532 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:50:04.344344   26532 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:50:04.344349   26532 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:50:04.344353   26532 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:50:04.344356   26532 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:50:04.344359   26532 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:50:04.344381   26532 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:50:04.344387   26532 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:50:04.344392   26532 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:50:04.344398   26532 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:50:04.344405   26532 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:50:04.344415   26532 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:50:04.344425   26532 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:50:04.344429   26532 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:50:04.344435   26532 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:50:04.344439   26532 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:50:04.344443   26532 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:50:04.344446   26532 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:50:04.344450   26532 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:50:04.344454   26532 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:50:04.344459   26532 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:50:04.344465   26532 cri.go:89] found id: ""
	I1119 21:50:04.344506   26532 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:50:04.357512   26532 out.go:203] 
	W1119 21:50:04.358553   26532 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:50:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:50:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:50:04.358574   26532 out.go:285] * 
	* 
	W1119 21:50:04.361543   26532 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:50:04.362623   26532 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (228.906196ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:50:04.420195   26594 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:50:04.420472   26594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:50:04.420485   26594 out.go:374] Setting ErrFile to fd 2...
	I1119 21:50:04.420490   26594 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:50:04.420675   26594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:50:04.420926   26594 mustload.go:66] Loading cluster: addons-418049
	I1119 21:50:04.421262   26594 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:50:04.421281   26594 addons.go:607] checking whether the cluster is paused
	I1119 21:50:04.421404   26594 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:50:04.421418   26594 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:50:04.421772   26594 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:50:04.440124   26594 ssh_runner.go:195] Run: systemctl --version
	I1119 21:50:04.440206   26594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:50:04.457639   26594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:50:04.547875   26594 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:50:04.547943   26594 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:50:04.574161   26594 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:50:04.574183   26594 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:50:04.574189   26594 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:50:04.574194   26594 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:50:04.574198   26594 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:50:04.574202   26594 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:50:04.574205   26594 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:50:04.574207   26594 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:50:04.574209   26594 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:50:04.574215   26594 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:50:04.574217   26594 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:50:04.574219   26594 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:50:04.574224   26594 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:50:04.574231   26594 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:50:04.574234   26594 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:50:04.574238   26594 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:50:04.574241   26594 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:50:04.574244   26594 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:50:04.574247   26594 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:50:04.574249   26594 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:50:04.574252   26594 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:50:04.574256   26594 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:50:04.574260   26594 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:50:04.574264   26594 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:50:04.574268   26594 cri.go:89] found id: ""
	I1119 21:50:04.574308   26594 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:50:04.586742   26594 out.go:203] 
	W1119 21:50:04.587957   26594 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:50:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:50:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:50:04.587977   26594 out.go:285] * 
	* 
	W1119 21:50:04.591058   26594 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:50:04.592133   26594 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (44.23s)

TestAddons/parallel/Headlamp (2.52s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-418049 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-418049 --alsologtostderr -v=1: exit status 11 (268.29745ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:49:38.204789   24002 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:49:38.205120   24002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:38.205137   24002 out.go:374] Setting ErrFile to fd 2...
	I1119 21:49:38.205144   24002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:38.205421   24002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:49:38.205763   24002 mustload.go:66] Loading cluster: addons-418049
	I1119 21:49:38.206279   24002 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:38.206309   24002 addons.go:607] checking whether the cluster is paused
	I1119 21:49:38.206448   24002 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:38.206468   24002 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:49:38.207032   24002 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:49:38.230196   24002 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:38.230253   24002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:49:38.250795   24002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:49:38.349195   24002 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:38.349292   24002 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:38.381847   24002 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:49:38.381885   24002 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:49:38.381891   24002 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:49:38.381896   24002 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:49:38.381901   24002 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:49:38.381907   24002 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:49:38.381912   24002 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:49:38.381916   24002 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:49:38.381920   24002 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:49:38.381934   24002 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:49:38.381944   24002 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:49:38.381949   24002 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:49:38.381957   24002 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:49:38.381961   24002 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:49:38.381968   24002 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:49:38.381984   24002 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:49:38.381993   24002 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:49:38.382000   24002 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:49:38.382004   24002 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:49:38.382008   24002 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:49:38.382011   24002 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:49:38.382016   24002 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:49:38.382019   24002 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:49:38.382023   24002 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:49:38.382027   24002 cri.go:89] found id: ""
	I1119 21:49:38.382088   24002 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:49:38.396751   24002 out.go:203] 
	W1119 21:49:38.398133   24002 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:49:38.398156   24002 out.go:285] * 
	* 
	W1119 21:49:38.401143   24002 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:49:38.402228   24002 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-418049 --alsologtostderr -v=1": exit status 11
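The Headlamp failure is the enable-side variant of the same paused check (MK_ADDON_ENABLE_PAUSED rather than MK_ADDON_DISABLE_PAUSED). A quick way to confirm the profile itself is not actually paused and that only runc's state directory is missing, assuming the cluster from this run is still up:

	# minikube's own status reflects whether the profile was paused via 'minikube pause'
	minikube -p addons-418049 status
	# the directory runc complains about in the log can be checked directly
	minikube -p addons-418049 ssh -- ls /run/runc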
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-418049
helpers_test.go:243: (dbg) docker inspect addons-418049:

-- stdout --
	[
	    {
	        "Id": "2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56",
	        "Created": "2025-11-19T21:47:32.785501192Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14834,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T21:47:32.812777816Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56/hostname",
	        "HostsPath": "/var/lib/docker/containers/2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56/hosts",
	        "LogPath": "/var/lib/docker/containers/2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56/2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56-json.log",
	        "Name": "/addons-418049",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-418049:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-418049",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2587ae0574ec366503e5ce4bc1bce84818f19bff6275942c0345e532fb286e56",
	                "LowerDir": "/var/lib/docker/overlay2/9bb5febb853bf51136be44320b0dbb0859e9b690dd21ae57082ee435562fc7f1-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9bb5febb853bf51136be44320b0dbb0859e9b690dd21ae57082ee435562fc7f1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9bb5febb853bf51136be44320b0dbb0859e9b690dd21ae57082ee435562fc7f1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9bb5febb853bf51136be44320b0dbb0859e9b690dd21ae57082ee435562fc7f1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-418049",
	                "Source": "/var/lib/docker/volumes/addons-418049/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-418049",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-418049",
	                "name.minikube.sigs.k8s.io": "addons-418049",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "76ebb1c02aec5768f4c9b0afad928936fcb43c9871d4dfaa07be51420650a2d9",
	            "SandboxKey": "/var/run/docker/netns/76ebb1c02aec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-418049": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3686714f91f96d551d1f231e1e1262ba4f1933bd595b20619b47187081139dc2",
	                    "EndpointID": "4082a882a07fae38d7bf161424b6efe0fcc3608cebc14f63617508651047c376",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "ee:fe:9a:5a:68:f0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-418049",
	                        "2587ae0574ec"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-418049 -n addons-418049
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-418049 logs -n 25: (1.130275705s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ delete  │ -p download-only-797272                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-797272   │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ start   │ -o=json --download-only -p download-only-761775 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-761775   │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ delete  │ -p download-only-761775                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-761775   │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ delete  │ -p download-only-797272                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-797272   │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ delete  │ -p download-only-761775                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-761775   │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ start   │ --download-only -p download-docker-279060 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-279060 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	│ delete  │ -p download-docker-279060                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-279060 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ start   │ --download-only -p binary-mirror-562747 --alsologtostderr --binary-mirror http://127.0.0.1:39249 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-562747   │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	│ delete  │ -p binary-mirror-562747                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-562747   │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ addons  │ disable dashboard -p addons-418049                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	│ addons  │ enable dashboard -p addons-418049                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	│ start   │ -p addons-418049 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:49 UTC │
	│ addons  │ addons-418049 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ ssh     │ addons-418049 ssh cat /opt/local-path-provisioner/pvc-507d12fa-be38-43d5-a275-67581d2b4b4d_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ addons  │ addons-418049 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ ip      │ addons-418049 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │ 19 Nov 25 21:49 UTC │
	│ addons  │ addons-418049 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ addons-418049 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	│ addons  │ enable headlamp -p addons-418049 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-418049          │ jenkins │ v1.37.0 │ 19 Nov 25 21:49 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:47:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
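The header above describes the glog-style line format: a severity letter (I, W, E, F for info, warning, error, fatal), the date as mmdd, a timestamp, the thread id, and the source file and line. When scanning a long start log like the one below for problems, filtering on the severity prefix is usually enough; a minimal sketch, assuming the log was saved to a file named last-start.log (hypothetical name):

	grep -E '^[[:space:]]*[WE][0-9]{4}' last-start.log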
	I1119 21:47:09.186037   14179 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:47:09.186244   14179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:09.186252   14179 out.go:374] Setting ErrFile to fd 2...
	I1119 21:47:09.186255   14179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:09.186537   14179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:47:09.187653   14179 out.go:368] Setting JSON to false
	I1119 21:47:09.188545   14179 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1777,"bootTime":1763587052,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:47:09.188634   14179 start.go:143] virtualization: kvm guest
	I1119 21:47:09.189989   14179 out.go:179] * [addons-418049] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 21:47:09.191259   14179 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:47:09.191271   14179 notify.go:221] Checking for updates...
	I1119 21:47:09.193263   14179 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:47:09.194515   14179 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 21:47:09.195534   14179 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 21:47:09.196569   14179 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 21:47:09.197539   14179 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:47:09.198568   14179 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:47:09.221213   14179 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 21:47:09.221276   14179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:09.275679   14179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-19 21:47:09.267087618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:47:09.275774   14179 docker.go:319] overlay module found
	I1119 21:47:09.277301   14179 out.go:179] * Using the docker driver based on user configuration
	I1119 21:47:09.278412   14179 start.go:309] selected driver: docker
	I1119 21:47:09.278424   14179 start.go:930] validating driver "docker" against <nil>
	I1119 21:47:09.278434   14179 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:47:09.279005   14179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:09.332641   14179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-19 21:47:09.323311203 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:47:09.332840   14179 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:47:09.333087   14179 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 21:47:09.334388   14179 out.go:179] * Using Docker driver with root privileges
	I1119 21:47:09.335409   14179 cni.go:84] Creating CNI manager for ""
	I1119 21:47:09.335459   14179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:47:09.335470   14179 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
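The CNI selection above is automatic: with the docker driver paired with the crio runtime, minikube recommends kindnet and sets NetworkPlugin=cni. The choice can normally be made explicit at start time; a sketch only, reusing the profile, driver and runtime flags from the start command recorded in the table above:

	minikube start -p addons-418049 --driver=docker --container-runtime=crio --cni=kindnet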
	I1119 21:47:09.335517   14179 start.go:353] cluster config:
	{Name:addons-418049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-418049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1119 21:47:09.336729   14179 out.go:179] * Starting "addons-418049" primary control-plane node in "addons-418049" cluster
	I1119 21:47:09.337723   14179 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 21:47:09.338608   14179 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 21:47:09.339431   14179 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:47:09.339469   14179 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 21:47:09.339481   14179 cache.go:65] Caching tarball of preloaded images
	I1119 21:47:09.339514   14179 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 21:47:09.339574   14179 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 21:47:09.339589   14179 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 21:47:09.339971   14179 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/config.json ...
	I1119 21:47:09.340000   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/config.json: {Name:mkd3486f71ee715842f91dc3decfe65edfd45631 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:09.354439   14179 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:47:09.354545   14179 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory
	I1119 21:47:09.354560   14179 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory, skipping pull
	I1119 21:47:09.354564   14179 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in cache, skipping pull
	I1119 21:47:09.354573   14179 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 as a tarball
	I1119 21:47:09.354578   14179 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 from local cache
	I1119 21:47:21.330767   14179 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 from cached tarball
	I1119 21:47:21.330828   14179 cache.go:243] Successfully downloaded all kic artifacts
	I1119 21:47:21.330882   14179 start.go:360] acquireMachinesLock for addons-418049: {Name:mk275dc52626d848e0f0a8364f95fd04a2a58c88 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 21:47:21.330984   14179 start.go:364] duration metric: took 80.484µs to acquireMachinesLock for "addons-418049"
	I1119 21:47:21.331012   14179 start.go:93] Provisioning new machine with config: &{Name:addons-418049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-418049 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 21:47:21.331102   14179 start.go:125] createHost starting for "" (driver="docker")
	I1119 21:47:21.332729   14179 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1119 21:47:21.332975   14179 start.go:159] libmachine.API.Create for "addons-418049" (driver="docker")
	I1119 21:47:21.333010   14179 client.go:173] LocalClient.Create starting
	I1119 21:47:21.333120   14179 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem
	I1119 21:47:21.709623   14179 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem
	I1119 21:47:21.919769   14179 cli_runner.go:164] Run: docker network inspect addons-418049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 21:47:21.937062   14179 cli_runner.go:211] docker network inspect addons-418049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 21:47:21.937123   14179 network_create.go:284] running [docker network inspect addons-418049] to gather additional debugging logs...
	I1119 21:47:21.937140   14179 cli_runner.go:164] Run: docker network inspect addons-418049
	W1119 21:47:21.952363   14179 cli_runner.go:211] docker network inspect addons-418049 returned with exit code 1
	I1119 21:47:21.952383   14179 network_create.go:287] error running [docker network inspect addons-418049]: docker network inspect addons-418049: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-418049 not found
	I1119 21:47:21.952394   14179 network_create.go:289] output of [docker network inspect addons-418049]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-418049 not found
	
	** /stderr **
	I1119 21:47:21.952500   14179 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 21:47:21.967992   14179 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c8add0}
	I1119 21:47:21.968020   14179 network_create.go:124] attempt to create docker network addons-418049 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1119 21:47:21.968064   14179 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-418049 addons-418049
	I1119 21:47:22.009935   14179 network_create.go:108] docker network addons-418049 192.168.49.0/24 created
	I1119 21:47:22.009961   14179 kic.go:121] calculated static IP "192.168.49.2" for the "addons-418049" container
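At this point the dedicated docker network exists and the node IP has been derived from its subnet. A quick way to confirm what was created, reusing the same Go template fields the log itself queries (a sketch; not part of the test run):

	docker network inspect addons-418049 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'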
	I1119 21:47:22.010014   14179 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 21:47:22.025652   14179 cli_runner.go:164] Run: docker volume create addons-418049 --label name.minikube.sigs.k8s.io=addons-418049 --label created_by.minikube.sigs.k8s.io=true
	I1119 21:47:22.041555   14179 oci.go:103] Successfully created a docker volume addons-418049
	I1119 21:47:22.041614   14179 cli_runner.go:164] Run: docker run --rm --name addons-418049-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-418049 --entrypoint /usr/bin/test -v addons-418049:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 21:47:28.497581   14179 cli_runner.go:217] Completed: docker run --rm --name addons-418049-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-418049 --entrypoint /usr/bin/test -v addons-418049:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib: (6.455918227s)
	I1119 21:47:28.497615   14179 oci.go:107] Successfully prepared a docker volume addons-418049
	I1119 21:47:28.497660   14179 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:47:28.497671   14179 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 21:47:28.497732   14179 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-418049:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 21:47:32.715435   14179 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-418049:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.217651808s)
	I1119 21:47:32.715463   14179 kic.go:203] duration metric: took 4.217789117s to extract preloaded images to volume ...
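The two docker run invocations above first verify the freshly created volume and then unpack the preloaded image tarball into it, so that cri-o inside the node container starts with the Kubernetes images already present. The general pattern, sketched with placeholder names (<tarball>, <volume> and <kicbase-image> are illustrative, not literal values from this run):

	docker run --rm --entrypoint /usr/bin/tar \
	  -v <tarball>:/preloaded.tar:ro \
	  -v <volume>:/extractDir \
	  <kicbase-image> -I lz4 -xf /preloaded.tar -C /extractDir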
	W1119 21:47:32.715541   14179 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 21:47:32.715575   14179 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 21:47:32.715611   14179 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 21:47:32.770485   14179 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-418049 --name addons-418049 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-418049 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-418049 --network addons-418049 --ip 192.168.49.2 --volume addons-418049:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 21:47:33.056172   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Running}}
	I1119 21:47:33.076037   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:33.093354   14179 cli_runner.go:164] Run: docker exec addons-418049 stat /var/lib/dpkg/alternatives/iptables
	I1119 21:47:33.136203   14179 oci.go:144] the created container "addons-418049" has a running status.
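The long docker run above publishes the node's ports 22, 2376, 5000, 8443 and 32443 onto 127.0.0.1 with ephemeral host ports (the --publish=127.0.0.1::<port> form); the SSH port 32768 that appears a few lines below was assigned this way. The bindings can be checked after the fact (sketch; the host port numbers vary per run):

	docker port addons-418049 22
	docker port addons-418049 8443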
	I1119 21:47:33.136230   14179 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa...
	I1119 21:47:33.494470   14179 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 21:47:33.518034   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:33.534554   14179 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 21:47:33.534578   14179 kic_runner.go:114] Args: [docker exec --privileged addons-418049 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 21:47:33.575785   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:33.593505   14179 machine.go:94] provisionDockerMachine start ...
	I1119 21:47:33.593609   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:33.610125   14179 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:33.610395   14179 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 21:47:33.610412   14179 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 21:47:33.734878   14179 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-418049
	
	I1119 21:47:33.734905   14179 ubuntu.go:182] provisioning hostname "addons-418049"
	I1119 21:47:33.734970   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:33.751898   14179 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:33.752134   14179 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 21:47:33.752150   14179 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-418049 && echo "addons-418049" | sudo tee /etc/hostname
	I1119 21:47:33.880479   14179 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-418049
	
	I1119 21:47:33.880540   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:33.896943   14179 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:33.897158   14179 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 21:47:33.897182   14179 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-418049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-418049/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-418049' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 21:47:34.018465   14179 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 21:47:34.018492   14179 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 21:47:34.018507   14179 ubuntu.go:190] setting up certificates
	I1119 21:47:34.018517   14179 provision.go:84] configureAuth start
	I1119 21:47:34.018570   14179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-418049
	I1119 21:47:34.034304   14179 provision.go:143] copyHostCerts
	I1119 21:47:34.034369   14179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 21:47:34.034474   14179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 21:47:34.034535   14179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 21:47:34.034593   14179 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.addons-418049 san=[127.0.0.1 192.168.49.2 addons-418049 localhost minikube]
	I1119 21:47:34.211516   14179 provision.go:177] copyRemoteCerts
	I1119 21:47:34.211571   14179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 21:47:34.211601   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:34.227713   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:34.316735   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 21:47:34.333750   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1119 21:47:34.348838   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 21:47:34.364214   14179 provision.go:87] duration metric: took 345.687974ms to configureAuth
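With the certificates copied and the SSH key installed, the node is reachable over the forwarded SSH port. The usual way in is the minikube wrapper, which resolves the port and key automatically; the raw ssh form below is a sketch that assumes the key path and port 32768 from this particular run (MINIKUBE_HOME here is the Jenkins workspace, not the default):

	minikube -p addons-418049 ssh
	ssh -i /home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa -p 32768 docker@127.0.0.1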
	I1119 21:47:34.364233   14179 ubuntu.go:206] setting minikube options for container-runtime
	I1119 21:47:34.364378   14179 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:47:34.364461   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:34.380455   14179 main.go:143] libmachine: Using SSH client type: native
	I1119 21:47:34.380682   14179 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1119 21:47:34.380706   14179 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 21:47:34.631614   14179 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 21:47:34.631634   14179 machine.go:97] duration metric: took 1.03810264s to provisionDockerMachine
	I1119 21:47:34.631644   14179 client.go:176] duration metric: took 13.298624117s to LocalClient.Create
	I1119 21:47:34.631659   14179 start.go:167] duration metric: took 13.298685832s to libmachine.API.Create "addons-418049"
	I1119 21:47:34.631666   14179 start.go:293] postStartSetup for "addons-418049" (driver="docker")
	I1119 21:47:34.631674   14179 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 21:47:34.631722   14179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 21:47:34.631763   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:34.648292   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:34.738083   14179 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 21:47:34.741110   14179 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 21:47:34.741146   14179 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 21:47:34.741158   14179 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 21:47:34.741206   14179 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 21:47:34.741228   14179 start.go:296] duration metric: took 109.557672ms for postStartSetup
	I1119 21:47:34.741493   14179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-418049
	I1119 21:47:34.757955   14179 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/config.json ...
	I1119 21:47:34.758184   14179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 21:47:34.758226   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:34.774912   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:34.860860   14179 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 21:47:34.864704   14179 start.go:128] duration metric: took 13.533588406s to createHost
	I1119 21:47:34.864727   14179 start.go:83] releasing machines lock for "addons-418049", held for 13.533730301s
	I1119 21:47:34.864783   14179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-418049
	I1119 21:47:34.880012   14179 ssh_runner.go:195] Run: cat /version.json
	I1119 21:47:34.880057   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:34.880093   14179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 21:47:34.880151   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:34.897693   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:34.898103   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:34.984346   14179 ssh_runner.go:195] Run: systemctl --version
	I1119 21:47:35.038433   14179 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 21:47:35.069322   14179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 21:47:35.073434   14179 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 21:47:35.073485   14179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 21:47:35.096605   14179 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 21:47:35.096625   14179 start.go:496] detecting cgroup driver to use...
	I1119 21:47:35.096653   14179 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 21:47:35.096695   14179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 21:47:35.110693   14179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 21:47:35.121130   14179 docker.go:218] disabling cri-docker service (if available) ...
	I1119 21:47:35.121180   14179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 21:47:35.135283   14179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 21:47:35.150422   14179 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 21:47:35.229519   14179 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 21:47:35.309677   14179 docker.go:234] disabling docker service ...
	I1119 21:47:35.309725   14179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 21:47:35.325303   14179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 21:47:35.335939   14179 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 21:47:35.413423   14179 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 21:47:35.491348   14179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 21:47:35.502106   14179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 21:47:35.514167   14179 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 21:47:35.514209   14179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.522868   14179 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 21:47:35.522904   14179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.530448   14179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.537840   14179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.545341   14179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 21:47:35.552306   14179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.559638   14179 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.571226   14179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 21:47:35.578786   14179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 21:47:35.585167   14179 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1119 21:47:35.585211   14179 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1119 21:47:35.595644   14179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 21:47:35.601913   14179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:47:35.674590   14179 ssh_runner.go:195] Run: sudo systemctl restart crio
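The sed and echo commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before cri-o is restarted. Their net effect is roughly the following drop-in (reconstructed from the logged commands, with the section headers assumed from the standard crio.conf layout; not a file captured from the node):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]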
	I1119 21:47:35.800685   14179 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 21:47:35.800772   14179 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 21:47:35.804387   14179 start.go:564] Will wait 60s for crictl version
	I1119 21:47:35.804428   14179 ssh_runner.go:195] Run: which crictl
	I1119 21:47:35.807616   14179 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 21:47:35.830290   14179 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 21:47:35.830390   14179 ssh_runner.go:195] Run: crio --version
	I1119 21:47:35.855692   14179 ssh_runner.go:195] Run: crio --version
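Because /etc/crictl.yaml was written earlier to point at the cri-o socket, the runtime information reported above can also be queried directly on the node; a sketch, with the endpoint spelled out explicitly (equivalent to relying on crictl.yaml):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version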
	I1119 21:47:35.881888   14179 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 21:47:35.883004   14179 cli_runner.go:164] Run: docker network inspect addons-418049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 21:47:35.900196   14179 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1119 21:47:35.903731   14179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 21:47:35.913050   14179 kubeadm.go:884] updating cluster {Name:addons-418049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-418049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 21:47:35.913156   14179 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 21:47:35.913195   14179 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:47:35.941600   14179 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:47:35.941617   14179 crio.go:433] Images already preloaded, skipping extraction
	I1119 21:47:35.941650   14179 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 21:47:35.963758   14179 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 21:47:35.963778   14179 cache_images.go:86] Images are preloaded, skipping loading
	I1119 21:47:35.963788   14179 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1119 21:47:35.963894   14179 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-418049 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-418049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 21:47:35.963964   14179 ssh_runner.go:195] Run: crio config
	I1119 21:47:36.004076   14179 cni.go:84] Creating CNI manager for ""
	I1119 21:47:36.004103   14179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:47:36.004121   14179 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 21:47:36.004142   14179 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-418049 NodeName:addons-418049 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 21:47:36.004252   14179 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-418049"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 21:47:36.004302   14179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 21:47:36.011478   14179 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 21:47:36.011549   14179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 21:47:36.018356   14179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1119 21:47:36.029402   14179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 21:47:36.043097   14179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
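The kubeadm config printed above is what lands in /var/tmp/minikube/kubeadm.yaml.new (2209 bytes). As an extra offline check it can be run through kubeadm's own validator; a sketch, assuming the kubeadm config validate subcommand available in recent kubeadm releases:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new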
	I1119 21:47:36.054470   14179 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1119 21:47:36.057591   14179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 21:47:36.066434   14179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:47:36.145585   14179 ssh_runner.go:195] Run: sudo systemctl start kubelet
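With kubelet.service and the 10-kubeadm.conf drop-in written and the daemon reloaded, the effective unit can be checked from inside the node; a sketch assuming the same profile name:
	minikube -p addons-418049 ssh -- sudo systemctl cat kubelet
	minikube -p addons-418049 ssh -- systemctl is-active kubelet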
	I1119 21:47:36.169638   14179 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049 for IP: 192.168.49.2
	I1119 21:47:36.169658   14179 certs.go:195] generating shared ca certs ...
	I1119 21:47:36.169678   14179 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.169800   14179 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 21:47:36.313386   14179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt ...
	I1119 21:47:36.313407   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt: {Name:mk8a3ae1f4768e95b44f6ee834507ec0dd5a31b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.313544   14179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key ...
	I1119 21:47:36.313555   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key: {Name:mk2a77f344d56cbf0fc2983daf73c303614b3719 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.313630   14179 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 21:47:36.539844   14179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt ...
	I1119 21:47:36.539865   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt: {Name:mk9aa5bf719ebb8ef9775762a12faf372326ce52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.539994   14179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key ...
	I1119 21:47:36.540004   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key: {Name:mke31cde355fc17855364fbd8b78836671f9a958 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.540068   14179 certs.go:257] generating profile certs ...
	I1119 21:47:36.540126   14179 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.key
	I1119 21:47:36.540140   14179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt with IP's: []
	I1119 21:47:36.718418   14179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt ...
	I1119 21:47:36.718440   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: {Name:mk869b270e1b1cb84dd4e9178af439e37d4418c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.718577   14179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.key ...
	I1119 21:47:36.718587   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.key: {Name:mk780689b286c609054031aaf912087fb5f54ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.718655   14179 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.key.c8bea405
	I1119 21:47:36.718672   14179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.crt.c8bea405 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1119 21:47:36.775557   14179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.crt.c8bea405 ...
	I1119 21:47:36.775578   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.crt.c8bea405: {Name:mkeca76dbf33a82f5728e1ce61f80fc8d83990e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.775694   14179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.key.c8bea405 ...
	I1119 21:47:36.775705   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.key.c8bea405: {Name:mkeb950e3fd7103b516eda460865e4ee953f9e25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:36.775777   14179 certs.go:382] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.crt.c8bea405 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.crt
	I1119 21:47:36.775867   14179 certs.go:386] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.key.c8bea405 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.key
	I1119 21:47:36.775915   14179 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.key
	I1119 21:47:36.775930   14179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.crt with IP's: []
	I1119 21:47:37.052088   14179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.crt ...
	I1119 21:47:37.052113   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.crt: {Name:mke0e1eed3e89beba161b6bb7f058d3ad91ea73b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:37.052280   14179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.key ...
	I1119 21:47:37.052294   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.key: {Name:mkd565d37c5ba991c22773fb6cd174fca1711be6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:37.052486   14179 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 21:47:37.052519   14179 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 21:47:37.052542   14179 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 21:47:37.052564   14179 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 21:47:37.053106   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 21:47:37.070808   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 21:47:37.086766   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 21:47:37.102277   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 21:47:37.117692   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1119 21:47:37.132978   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 21:47:37.148302   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 21:47:37.163379   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1119 21:47:37.178314   14179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 21:47:37.195346   14179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
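Once the profile certificates have been copied into /var/lib/minikube/certs, their contents (including the SANs listed when they were generated) can be inspected directly; a sketch using the apiserver cert path copied above:
	minikube -p addons-418049 ssh -- sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt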
	I1119 21:47:37.206314   14179 ssh_runner.go:195] Run: openssl version
	I1119 21:47:37.211907   14179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 21:47:37.221859   14179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:47:37.225061   14179 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:47:37.225104   14179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 21:47:37.258276   14179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
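The b5213941.0 symlink created above is named after the subject hash of minikubeCA.pem, which is exactly what the preceding openssl x509 -hash -noout call computes; the same check by hand from inside the node:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0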
	I1119 21:47:37.265507   14179 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 21:47:37.268463   14179 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 21:47:37.268509   14179 kubeadm.go:401] StartCluster: {Name:addons-418049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-418049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:47:37.268591   14179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:47:37.268636   14179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:47:37.292769   14179 cri.go:89] found id: ""
	I1119 21:47:37.292809   14179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 21:47:37.299702   14179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 21:47:37.306655   14179 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 21:47:37.306690   14179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 21:47:37.313380   14179 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 21:47:37.313393   14179 kubeadm.go:158] found existing configuration files:
	
	I1119 21:47:37.313418   14179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 21:47:37.319916   14179 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 21:47:37.319947   14179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 21:47:37.326196   14179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 21:47:37.332788   14179 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 21:47:37.332836   14179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 21:47:37.339086   14179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 21:47:37.345558   14179 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 21:47:37.345589   14179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 21:47:37.351956   14179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 21:47:37.358402   14179 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 21:47:37.358434   14179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 21:47:37.364842   14179 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 21:47:37.416101   14179 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 21:47:37.467449   14179 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 21:47:46.089232   14179 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 21:47:46.089315   14179 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 21:47:46.089385   14179 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 21:47:46.089430   14179 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 21:47:46.089459   14179 kubeadm.go:319] OS: Linux
	I1119 21:47:46.089496   14179 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 21:47:46.089535   14179 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 21:47:46.089635   14179 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 21:47:46.089708   14179 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 21:47:46.089755   14179 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 21:47:46.089796   14179 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 21:47:46.089853   14179 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 21:47:46.089891   14179 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 21:47:46.089961   14179 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 21:47:46.090053   14179 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 21:47:46.090187   14179 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 21:47:46.090288   14179 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 21:47:46.091710   14179 out.go:252]   - Generating certificates and keys ...
	I1119 21:47:46.091791   14179 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 21:47:46.091902   14179 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 21:47:46.091998   14179 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 21:47:46.092076   14179 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 21:47:46.092129   14179 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 21:47:46.092172   14179 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 21:47:46.092216   14179 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 21:47:46.092313   14179 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-418049 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 21:47:46.092356   14179 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 21:47:46.092450   14179 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-418049 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1119 21:47:46.092506   14179 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 21:47:46.092556   14179 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 21:47:46.092594   14179 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 21:47:46.092680   14179 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 21:47:46.092762   14179 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 21:47:46.092876   14179 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 21:47:46.092951   14179 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 21:47:46.093047   14179 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 21:47:46.093124   14179 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 21:47:46.093225   14179 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 21:47:46.093329   14179 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 21:47:46.094605   14179 out.go:252]   - Booting up control plane ...
	I1119 21:47:46.094680   14179 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 21:47:46.094760   14179 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 21:47:46.094855   14179 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 21:47:46.094994   14179 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 21:47:46.095097   14179 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 21:47:46.095226   14179 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 21:47:46.095396   14179 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 21:47:46.095451   14179 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 21:47:46.095565   14179 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 21:47:46.095648   14179 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 21:47:46.095715   14179 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.562793ms
	I1119 21:47:46.095829   14179 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 21:47:46.095943   14179 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1119 21:47:46.096016   14179 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 21:47:46.096103   14179 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 21:47:46.096219   14179 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.091629085s
	I1119 21:47:46.096284   14179 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.111984653s
	I1119 21:47:46.096398   14179 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.500894283s
	I1119 21:47:46.096502   14179 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 21:47:46.096608   14179 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 21:47:46.096658   14179 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 21:47:46.096910   14179 kubeadm.go:319] [mark-control-plane] Marking the node addons-418049 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 21:47:46.097001   14179 kubeadm.go:319] [bootstrap-token] Using token: rnz4hq.hop80trcclzl6sbi
	I1119 21:47:46.098350   14179 out.go:252]   - Configuring RBAC rules ...
	I1119 21:47:46.098464   14179 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 21:47:46.098573   14179 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 21:47:46.098726   14179 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 21:47:46.098859   14179 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 21:47:46.098963   14179 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 21:47:46.099041   14179 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 21:47:46.099142   14179 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 21:47:46.099186   14179 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 21:47:46.099242   14179 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 21:47:46.099251   14179 kubeadm.go:319] 
	I1119 21:47:46.099310   14179 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 21:47:46.099319   14179 kubeadm.go:319] 
	I1119 21:47:46.099416   14179 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 21:47:46.099426   14179 kubeadm.go:319] 
	I1119 21:47:46.099468   14179 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 21:47:46.099540   14179 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 21:47:46.099591   14179 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 21:47:46.099597   14179 kubeadm.go:319] 
	I1119 21:47:46.099642   14179 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 21:47:46.099648   14179 kubeadm.go:319] 
	I1119 21:47:46.099692   14179 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 21:47:46.099698   14179 kubeadm.go:319] 
	I1119 21:47:46.099763   14179 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 21:47:46.099873   14179 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 21:47:46.099969   14179 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 21:47:46.099978   14179 kubeadm.go:319] 
	I1119 21:47:46.100097   14179 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 21:47:46.100196   14179 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 21:47:46.100203   14179 kubeadm.go:319] 
	I1119 21:47:46.100292   14179 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token rnz4hq.hop80trcclzl6sbi \
	I1119 21:47:46.100430   14179 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b \
	I1119 21:47:46.100551   14179 kubeadm.go:319] 	--control-plane 
	I1119 21:47:46.100564   14179 kubeadm.go:319] 
	I1119 21:47:46.100673   14179 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 21:47:46.100680   14179 kubeadm.go:319] 
	I1119 21:47:46.100751   14179 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token rnz4hq.hop80trcclzl6sbi \
	I1119 21:47:46.100870   14179 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b 
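The join command printed above embeds a bootstrap token with a 24h TTL (per the bootstrapTokens stanza in the generated config); if it lapses, a fresh join command can be printed from the control plane, as a sketch:
	minikube -p addons-418049 ssh -- sudo /var/lib/minikube/binaries/v1.34.1/kubeadm token create --print-join-command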
	I1119 21:47:46.100881   14179 cni.go:84] Creating CNI manager for ""
	I1119 21:47:46.100891   14179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:47:46.102183   14179 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 21:47:46.103242   14179 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 21:47:46.107193   14179 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 21:47:46.107206   14179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 21:47:46.119501   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
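The manifest applied above is the kindnet CNI recommended for the docker driver + crio runtime pairing; whether its pods came up can be checked afterwards. A sketch that assumes kindnet's usual app=kindnet pod label:
	kubectl --context addons-418049 -n kube-system get pods -l app=kindnet -o wide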
	I1119 21:47:46.304890   14179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 21:47:46.304934   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:46.304971   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-418049 minikube.k8s.io/updated_at=2025_11_19T21_47_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=addons-418049 minikube.k8s.io/primary=true
	I1119 21:47:46.314516   14179 ops.go:34] apiserver oom_adj: -16
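The oom_adj value of -16 reported above is read straight from /proc for the kube-apiserver process; the same read by hand inside the node:
	minikube -p addons-418049 ssh -- 'cat /proc/$(pgrep kube-apiserver)/oom_adj'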
	I1119 21:47:46.375125   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:46.875341   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:47.375428   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:47.875777   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:48.375874   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:48.876061   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:49.375533   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:49.875359   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:50.375741   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:50.875277   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:51.375767   14179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 21:47:51.433199   14179 kubeadm.go:1114] duration metric: took 5.128303504s to wait for elevateKubeSystemPrivileges
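The repeated "kubectl get sa default" runs above are a readiness poll for the default service account, retried roughly every 500ms until it exists (about 5.1s here per the duration metric). An equivalent standalone poll, as a sketch:
	until kubectl --context addons-418049 -n default get sa default >/dev/null 2>&1; do sleep 0.5; done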
	I1119 21:47:51.433237   14179 kubeadm.go:403] duration metric: took 14.164733011s to StartCluster
	I1119 21:47:51.433258   14179 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:51.433369   14179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 21:47:51.433737   14179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:51.433938   14179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 21:47:51.433966   14179 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 21:47:51.434038   14179 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1119 21:47:51.434140   14179 addons.go:70] Setting yakd=true in profile "addons-418049"
	I1119 21:47:51.434147   14179 addons.go:70] Setting inspektor-gadget=true in profile "addons-418049"
	I1119 21:47:51.434165   14179 addons.go:239] Setting addon inspektor-gadget=true in "addons-418049"
	I1119 21:47:51.434171   14179 addons.go:70] Setting storage-provisioner=true in profile "addons-418049"
	I1119 21:47:51.434183   14179 addons.go:239] Setting addon storage-provisioner=true in "addons-418049"
	I1119 21:47:51.434201   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434201   14179 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:47:51.434199   14179 addons.go:70] Setting volcano=true in profile "addons-418049"
	I1119 21:47:51.434222   14179 addons.go:70] Setting volumesnapshots=true in profile "addons-418049"
	I1119 21:47:51.434235   14179 addons.go:239] Setting addon volumesnapshots=true in "addons-418049"
	I1119 21:47:51.434218   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434250   14179 addons.go:70] Setting default-storageclass=true in profile "addons-418049"
	I1119 21:47:51.434257   14179 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-418049"
	I1119 21:47:51.434268   14179 addons.go:70] Setting cloud-spanner=true in profile "addons-418049"
	I1119 21:47:51.434273   14179 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-418049"
	I1119 21:47:51.434278   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434280   14179 addons.go:70] Setting registry=true in profile "addons-418049"
	I1119 21:47:51.434290   14179 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-418049"
	I1119 21:47:51.434297   14179 addons.go:239] Setting addon registry=true in "addons-418049"
	I1119 21:47:51.434302   14179 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-418049"
	I1119 21:47:51.434329   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434339   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434598   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434740   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434777   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434789   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434802   14179 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-418049"
	I1119 21:47:51.434805   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434860   14179 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-418049"
	I1119 21:47:51.434251   14179 addons.go:70] Setting metrics-server=true in profile "addons-418049"
	I1119 21:47:51.434933   14179 addons.go:70] Setting ingress=true in profile "addons-418049"
	I1119 21:47:51.434998   14179 addons.go:239] Setting addon ingress=true in "addons-418049"
	I1119 21:47:51.435065   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434789   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434276   14179 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-418049"
	I1119 21:47:51.435359   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.435728   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.435838   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.436039   14179 out.go:179] * Verifying Kubernetes components...
	I1119 21:47:51.434165   14179 addons.go:239] Setting addon yakd=true in "addons-418049"
	I1119 21:47:51.436110   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.436576   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434199   14179 addons.go:70] Setting registry-creds=true in profile "addons-418049"
	I1119 21:47:51.437871   14179 addons.go:239] Setting addon registry-creds=true in "addons-418049"
	I1119 21:47:51.437899   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.438368   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434952   14179 addons.go:239] Setting addon metrics-server=true in "addons-418049"
	I1119 21:47:51.442503   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.434241   14179 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-418049"
	I1119 21:47:51.442952   14179 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-418049"
	I1119 21:47:51.443022   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434286   14179 addons.go:239] Setting addon cloud-spanner=true in "addons-418049"
	I1119 21:47:51.434895   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.443089   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.443254   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434236   14179 addons.go:239] Setting addon volcano=true in "addons-418049"
	I1119 21:47:51.444495   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.444846   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.444976   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.443346   14179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 21:47:51.434971   14179 addons.go:70] Setting ingress-dns=true in profile "addons-418049"
	I1119 21:47:51.445278   14179 addons.go:239] Setting addon ingress-dns=true in "addons-418049"
	I1119 21:47:51.445322   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.445748   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.434961   14179 addons.go:70] Setting gcp-auth=true in profile "addons-418049"
	I1119 21:47:51.446087   14179 mustload.go:66] Loading cluster: addons-418049
	I1119 21:47:51.444089   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.446291   14179 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:47:51.446542   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.476652   14179 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1119 21:47:51.479384   14179 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 21:47:51.479409   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1119 21:47:51.479471   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.486911   14179 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1119 21:47:51.488753   14179 out.go:179]   - Using image docker.io/registry:3.0.0
	I1119 21:47:51.493921   14179 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1119 21:47:51.493951   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1119 21:47:51.494031   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.497561   14179 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1119 21:47:51.499209   14179 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1119 21:47:51.499228   14179 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1119 21:47:51.499298   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.512276   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1119 21:47:51.514631   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1119 21:47:51.514690   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1119 21:47:51.515923   14179 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1119 21:47:51.515964   14179 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1119 21:47:51.516035   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.516974   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1119 21:47:51.518688   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1119 21:47:51.520780   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1119 21:47:51.522744   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1119 21:47:51.523506   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.529874   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1119 21:47:51.531311   14179 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W1119 21:47:51.536366   14179 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
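The warning above comes from the volcano addon declining to install on the crio runtime; the remaining addons continue to install. The profile's resulting addon state can be listed afterwards with the minikube CLI, a sketch:
	minikube -p addons-418049 addons list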
	I1119 21:47:51.536651   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1119 21:47:51.537988   14179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1119 21:47:51.538079   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.536907   14179 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1119 21:47:51.536937   14179 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1119 21:47:51.539624   14179 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1119 21:47:51.539646   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1119 21:47:51.539692   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.539843   14179 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1119 21:47:51.539892   14179 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 21:47:51.539906   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1119 21:47:51.539960   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.542836   14179 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1119 21:47:51.542853   14179 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1119 21:47:51.542914   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.544992   14179 addons.go:239] Setting addon default-storageclass=true in "addons-418049"
	I1119 21:47:51.545038   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.545523   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.546597   14179 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1119 21:47:51.548132   14179 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1119 21:47:51.548152   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1119 21:47:51.548197   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.553990   14179 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1119 21:47:51.557105   14179 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 21:47:51.557129   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1119 21:47:51.557180   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.558008   14179 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-418049"
	I1119 21:47:51.558048   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:51.558545   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:51.558763   14179 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 21:47:51.560458   14179 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 21:47:51.560482   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 21:47:51.560531   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.561753   14179 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1119 21:47:51.563156   14179 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 21:47:51.563543   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1119 21:47:51.563708   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.568325   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.572919   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.573399   14179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 21:47:51.574837   14179 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1119 21:47:51.578840   14179 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:47:51.584779   14179 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:47:51.591307   14179 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 21:47:51.591331   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1119 21:47:51.591389   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.615251   14179 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1119 21:47:51.617926   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.620223   14179 out.go:179]   - Using image docker.io/busybox:stable
	I1119 21:47:51.621736   14179 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 21:47:51.621780   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1119 21:47:51.621859   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.636123   14179 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 21:47:51.636145   14179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 21:47:51.636167   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.636205   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:51.636116   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.637467   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.638958   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.640986   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.641209   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.646124   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.646292   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.649313   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.651999   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	W1119 21:47:51.654093   14179 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1119 21:47:51.654141   14179 retry.go:31] will retry after 233.822989ms: ssh: handshake failed: EOF
	I1119 21:47:51.663633   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:51.687989   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	W1119 21:47:51.689021   14179 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1119 21:47:51.689043   14179 retry.go:31] will retry after 194.998612ms: ssh: handshake failed: EOF
	I1119 21:47:51.698422   14179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 21:47:51.749970   14179 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1119 21:47:51.749994   14179 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1119 21:47:51.763610   14179 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1119 21:47:51.763637   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1119 21:47:51.771494   14179 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1119 21:47:51.771516   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1119 21:47:51.776124   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1119 21:47:51.780015   14179 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1119 21:47:51.780033   14179 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1119 21:47:51.787318   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1119 21:47:51.797225   14179 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 21:47:51.797245   14179 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1119 21:47:51.817279   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1119 21:47:51.817304   14179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1119 21:47:51.828824   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1119 21:47:51.829168   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1119 21:47:51.831850   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1119 21:47:51.842011   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1119 21:47:51.842319   14179 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1119 21:47:51.842368   14179 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1119 21:47:51.842857   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1119 21:47:51.842988   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1119 21:47:51.843929   14179 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1119 21:47:51.843985   14179 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1119 21:47:51.850772   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 21:47:51.851282   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1119 21:47:51.863090   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1119 21:47:51.863161   14179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1119 21:47:51.878465   14179 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1119 21:47:51.878489   14179 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1119 21:47:51.896442   14179 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1119 21:47:51.896467   14179 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1119 21:47:51.903780   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1119 21:47:51.903803   14179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1119 21:47:51.915790   14179 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1119 21:47:51.915824   14179 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1119 21:47:51.934062   14179 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1119 21:47:51.934098   14179 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1119 21:47:51.934756   14179 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
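The "host record injected" message closes out the coredns rewrite started at 21:47:51.573399: that sed pipeline splices a hosts block ahead of the forward directive (and a log directive ahead of errors) so that host.minikube.internal resolves to the host gateway at 192.168.49.1. Reconstructed from the sed expression alone, not from the rendered ConfigMap, the injected Corefile fragment looks roughly like this:

	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf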
	I1119 21:47:51.935916   14179 node_ready.go:35] waiting up to 6m0s for node "addons-418049" to be "Ready" ...
	I1119 21:47:51.961578   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1119 21:47:51.961607   14179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1119 21:47:51.964859   14179 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1119 21:47:51.964879   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1119 21:47:51.995699   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1119 21:47:51.995726   14179 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1119 21:47:52.002371   14179 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1119 21:47:52.002395   14179 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1119 21:47:52.022319   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1119 21:47:52.055461   14179 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:47:52.055486   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1119 21:47:52.069329   14179 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1119 21:47:52.069349   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1119 21:47:52.085134   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 21:47:52.108552   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:47:52.126805   14179 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1119 21:47:52.126843   14179 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1119 21:47:52.149192   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1119 21:47:52.175650   14179 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1119 21:47:52.175739   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1119 21:47:52.243607   14179 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1119 21:47:52.243632   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1119 21:47:52.275580   14179 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 21:47:52.275773   14179 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1119 21:47:52.313952   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1119 21:47:52.457026   14179 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-418049" context rescaled to 1 replicas
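The rescale of coredns to one replica is part of the same single-node bring-up. Assuming the kubeconfig context minikube creates for this profile, the equivalent manual step would be roughly:

	# illustrative only: pin the kube-system coredns deployment to a single replica
	kubectl --context addons-418049 -n kube-system scale deployment coredns --replicas=1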
	I1119 21:47:52.480007   14179 addons.go:480] Verifying addon registry=true in "addons-418049"
	I1119 21:47:52.482063   14179 out.go:179] * Verifying registry addon...
	I1119 21:47:52.483901   14179 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1119 21:47:52.487582   14179 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 21:47:52.487645   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:52.699570   14179 addons.go:480] Verifying addon metrics-server=true in "addons-418049"
	I1119 21:47:52.921245   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.079191315s)
	I1119 21:47:52.921276   14179 addons.go:480] Verifying addon ingress=true in "addons-418049"
	I1119 21:47:52.921314   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.078429177s)
	I1119 21:47:52.921427   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.078421968s)
	I1119 21:47:52.921556   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.070659647s)
	I1119 21:47:52.921565   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.070261817s)
	I1119 21:47:52.922831   14179 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-418049 service yakd-dashboard -n yakd-dashboard
	
	I1119 21:47:52.922973   14179 out.go:179] * Verifying ingress addon...
	I1119 21:47:52.924687   14179 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1119 21:47:52.926868   14179 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1119 21:47:53.027173   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:53.399515   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.290920232s)
	W1119 21:47:53.399576   14179 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1119 21:47:53.399607   14179 retry.go:31] will retry after 170.940812ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
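	Both failures above are the same CRD-establishment race: csi-hostpath-snapshotclass.yaml declares a VolumeSnapshotClass in the very kubectl invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has not registered the kind yet and reports "no matches for kind". The retry, and the --force re-apply that follows, go through once the CRDs are established. Outside this harness, a generic way to avoid the race is to wait for the CRD before applying objects of that kind, for example (the 60s timeout is arbitrary):

	# wait for the VolumeSnapshotClass CRD to be established before using it
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io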
	I1119 21:47:53.399605   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.250328716s)
	I1119 21:47:53.399845   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.085834597s)
	I1119 21:47:53.399878   14179 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-418049"
	I1119 21:47:53.402890   14179 out.go:179] * Verifying csi-hostpath-driver addon...
	I1119 21:47:53.405230   14179 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1119 21:47:53.407348   14179 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 21:47:53.407368   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:53.427623   14179 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1119 21:47:53.427640   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:53.485616   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:53.570963   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1119 21:47:53.907721   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:53.927442   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:47:53.938110   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
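This node_ready warning recurs throughout the rest of the log; it is the 6m0s "Ready" wait announced at 21:47:51.935916 polling the node's Ready condition. Assuming the profile's kubeconfig context, an equivalent blocking check from outside the test would be:

	# block until the node reports Ready, using the same 6m budget the log mentions
	kubectl wait --for=condition=Ready node/addons-418049 --timeout=6m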
	I1119 21:47:54.008040   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:54.408225   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:54.427036   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:54.486264   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:54.908343   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:54.927273   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:55.008380   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:55.407833   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:55.427771   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:55.485877   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:55.908348   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:55.926800   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:47:55.938588   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:47:56.008623   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:56.010352   14179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.439352015s)
	I1119 21:47:56.407363   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:56.427008   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:56.486010   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:56.908281   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:56.927008   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:57.008719   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:57.407429   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:57.427345   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:57.486669   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:57.907546   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:57.927600   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:58.008011   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:58.408274   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:58.427039   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:47:58.437554   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:47:58.486118   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:58.908208   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:58.927225   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:59.009113   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:59.139671   14179 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1119 21:47:59.139726   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:59.157301   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:59.251570   14179 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1119 21:47:59.263203   14179 addons.go:239] Setting addon gcp-auth=true in "addons-418049"
	I1119 21:47:59.263241   14179 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:47:59.263582   14179 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:47:59.280884   14179 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1119 21:47:59.280922   14179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:47:59.296834   14179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:47:59.384932   14179 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1119 21:47:59.386054   14179 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1119 21:47:59.387034   14179 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1119 21:47:59.387046   14179 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1119 21:47:59.398716   14179 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1119 21:47:59.398732   14179 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1119 21:47:59.408448   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:59.410822   14179 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 21:47:59.410840   14179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1119 21:47:59.422146   14179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1119 21:47:59.427127   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:59.487157   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:47:59.698250   14179 addons.go:480] Verifying addon gcp-auth=true in "addons-418049"
	I1119 21:47:59.699514   14179 out.go:179] * Verifying gcp-auth addon...
	I1119 21:47:59.701312   14179 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1119 21:47:59.703453   14179 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1119 21:47:59.703471   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
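The gcp-auth verification uses the same label-selector wait as the registry and ingress checks. Assuming the profile's kubeconfig context, the pod this loop is watching can be listed directly with:

	# inspect the gcp-auth webhook pod the wait loop is polling
	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth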
	I1119 21:47:59.907626   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:47:59.928045   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:47:59.986369   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:00.203334   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:00.407468   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:00.427402   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:00.437953   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:00.486455   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:00.703993   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:00.908152   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:00.927073   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:00.986250   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:01.203233   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:01.407489   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:01.427422   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:01.485573   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:01.703938   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:01.908043   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:01.926971   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:01.986404   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:02.203515   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:02.407956   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:02.426763   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:02.438207   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:02.485681   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:02.703939   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:02.908191   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:02.927268   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:02.985613   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:03.203987   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:03.407946   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:03.426888   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:03.486090   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:03.704060   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:03.908137   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:03.926988   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:03.986627   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:04.203809   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:04.408039   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:04.426906   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:04.438320   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:04.485998   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:04.705405   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:04.907787   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:04.927426   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:04.985758   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:05.204177   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:05.408285   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:05.427039   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:05.486157   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:05.704752   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:05.907894   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:05.926618   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:05.985995   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:06.204126   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:06.408544   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:06.427290   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:06.486527   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:06.703578   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:06.907974   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:06.926874   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:06.938461   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:06.986040   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:07.204440   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:07.407750   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:07.427774   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:07.485970   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:07.704195   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:07.907279   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:07.927103   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:07.986356   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:08.203197   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:08.407323   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:08.427108   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:08.486191   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:08.703499   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:08.908187   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:08.927089   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:08.986253   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:09.203568   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:09.407904   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:09.426744   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:09.438292   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:09.485876   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:09.703974   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:09.908179   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:09.927098   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:09.986444   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:10.203852   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:10.408564   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:10.427606   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:10.486654   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:10.704017   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:10.908579   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:10.927924   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:10.986407   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:11.203841   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:11.408139   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:11.427098   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:11.486346   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:11.703732   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:11.908055   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:11.926974   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:11.938720   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:11.986322   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:12.203400   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:12.407690   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:12.427495   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:12.485781   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:12.703797   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:12.908022   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:12.926884   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:12.986145   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:13.204211   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:13.407565   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:13.427917   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:13.486094   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:13.704363   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:13.907347   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:13.927123   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:13.986789   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:14.204012   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:14.408496   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:14.427294   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:14.438059   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:14.486474   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:14.703491   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:14.908130   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:14.926897   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:14.985932   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:15.204339   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:15.407503   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:15.427378   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:15.485544   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:15.703619   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:15.907882   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:15.926515   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:15.985977   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:16.204239   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:16.407684   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:16.427525   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:16.438091   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:16.485758   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:16.703854   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:16.907947   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:16.926618   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:16.985730   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:17.203906   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:17.408007   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:17.426862   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:17.486187   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:17.704355   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:17.907402   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:17.927110   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:17.986093   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:18.204209   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:18.407158   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:18.427003   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:18.486124   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:18.704091   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:18.908161   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:18.927201   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:18.937835   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:18.986566   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:19.203753   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:19.407672   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:19.427506   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:19.485730   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:19.703705   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:19.907988   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:19.926744   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:19.985857   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:20.203939   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:20.407991   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:20.426769   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:20.485863   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:20.703868   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:20.907915   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:20.926823   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:20.938449   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:20.986060   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:21.204003   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:21.408117   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:21.427030   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:21.486208   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:21.703322   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:21.907249   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:21.926978   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:21.986042   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:22.204583   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:22.407797   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:22.426773   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:22.486279   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:22.703328   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:22.907309   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:22.927230   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:22.986356   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:23.203281   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:23.407251   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:23.427168   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:23.437593   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:23.486159   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:23.703978   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:23.908100   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:23.926918   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:23.986221   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:24.203418   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:24.408014   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:24.426850   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:24.486263   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:24.703465   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:24.907778   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:24.927762   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:24.985927   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:25.204013   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:25.408261   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:25.427204   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:25.438063   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:25.485576   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:25.703694   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:25.907915   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:25.926630   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:25.985731   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:26.203788   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:26.407893   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:26.426715   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:26.485966   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:26.704287   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:26.907563   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:26.927445   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:26.985775   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:27.203994   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:27.408197   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:27.426972   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:27.438493   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:27.486054   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:27.704149   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:27.907455   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:27.927356   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:27.986495   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:28.203510   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:28.407402   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:28.427198   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:28.486394   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:28.703461   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:28.907642   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:28.927595   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:28.985902   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:29.204258   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:29.407397   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:29.427327   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:29.485674   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:29.703830   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:29.907962   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:29.926910   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:29.938592   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:29.986233   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:30.203494   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:30.407586   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:30.427424   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:30.485719   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:30.703739   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:30.907990   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:30.926789   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:30.986145   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:31.204322   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:31.407378   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:31.427237   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:31.486510   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:31.703735   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:31.907747   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:31.928117   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:31.986214   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:32.203122   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:32.408620   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:32.427378   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1119 21:48:32.438118   14179 node_ready.go:57] node "addons-418049" has "Ready":"False" status (will retry)
	I1119 21:48:32.485839   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:32.703921   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:32.908298   14179 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1119 21:48:32.908326   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:32.927297   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:32.937774   14179 node_ready.go:49] node "addons-418049" is "Ready"
	I1119 21:48:32.937793   14179 node_ready.go:38] duration metric: took 41.00185866s for node "addons-418049" to be "Ready" ...
	I1119 21:48:32.937807   14179 api_server.go:52] waiting for apiserver process to appear ...
	I1119 21:48:32.937871   14179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 21:48:32.953495   14179 api_server.go:72] duration metric: took 41.519496566s to wait for apiserver process to appear ...
	I1119 21:48:32.953520   14179 api_server.go:88] waiting for apiserver healthz status ...
	I1119 21:48:32.953541   14179 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1119 21:48:32.957992   14179 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1119 21:48:32.958879   14179 api_server.go:141] control plane version: v1.34.1
	I1119 21:48:32.958907   14179 api_server.go:131] duration metric: took 5.379808ms to wait for apiserver health ...
	I1119 21:48:32.958918   14179 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 21:48:32.962162   14179 system_pods.go:59] 20 kube-system pods found
	I1119 21:48:32.962188   14179 system_pods.go:61] "amd-gpu-device-plugin-2tvsr" [2423e06d-a3f8-4cdf-9d51-7007aac8105b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 21:48:32.962195   14179 system_pods.go:61] "coredns-66bc5c9577-7v6rp" [025a1fd1-54b8-4c2a-9396-a314e9e9ce42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:48:32.962202   14179 system_pods.go:61] "csi-hostpath-attacher-0" [5cb09132-5e1a-4574-a6a5-51703af7e782] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:48:32.962207   14179 system_pods.go:61] "csi-hostpath-resizer-0" [65e11f74-ff7f-4f74-97a3-38ad91e62f43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:48:32.962213   14179 system_pods.go:61] "csi-hostpathplugin-2mv8p" [afa72be6-aaa8-49bb-be6d-f69f6ab56d62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:48:32.962217   14179 system_pods.go:61] "etcd-addons-418049" [d62d961a-4ff4-4725-8117-7980a03c4db6] Running
	I1119 21:48:32.962221   14179 system_pods.go:61] "kindnet-52bj8" [2737a8dd-e93f-431c-be31-f1b22dce9519] Running
	I1119 21:48:32.962225   14179 system_pods.go:61] "kube-apiserver-addons-418049" [f7b37bb1-9c05-41f7-b76d-fa279ef5e122] Running
	I1119 21:48:32.962228   14179 system_pods.go:61] "kube-controller-manager-addons-418049" [5b61c8e6-3136-4a52-b7cd-be38ac892b5f] Running
	I1119 21:48:32.962233   14179 system_pods.go:61] "kube-ingress-dns-minikube" [ce2e729e-662a-477b-a9ff-ee58569a350d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:48:32.962239   14179 system_pods.go:61] "kube-proxy-8rrhm" [ea9ea337-5c88-4577-b868-82aaa0234723] Running
	I1119 21:48:32.962242   14179 system_pods.go:61] "kube-scheduler-addons-418049" [0006366e-06a4-402a-a25b-dab34f284544] Running
	I1119 21:48:32.962247   14179 system_pods.go:61] "metrics-server-85b7d694d7-ggkmz" [d454f251-2d9d-4a61-a3a8-4aa052b74bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:48:32.962254   14179 system_pods.go:61] "nvidia-device-plugin-daemonset-86rtv" [a0be7cf6-3cc7-4cee-aea7-f7413045caad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:48:32.962262   14179 system_pods.go:61] "registry-6b586f9694-7pv4f" [c273eed6-5720-40a2-aac8-2149c492c58d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:48:32.962267   14179 system_pods.go:61] "registry-creds-764b6fb674-j5lrp" [eefdd28e-9cfa-4e4a-8c18-ecececdc9c06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:48:32.962272   14179 system_pods.go:61] "registry-proxy-znvmk" [5769f042-e96d-4796-9313-85e723be54a5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:48:32.962279   14179 system_pods.go:61] "snapshot-controller-7d9fbc56b8-knb29" [1f62ae02-aa23-4fae-b7dd-44e2385bce74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:32.962283   14179 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rcvt9" [718fcfab-fbf5-46f9-84ed-a5b10d561277] Pending
	I1119 21:48:32.962288   14179 system_pods.go:61] "storage-provisioner" [147f8cf8-e9ef-4b12-afd8-1fbb995db186] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:48:32.962292   14179 system_pods.go:74] duration metric: took 3.369733ms to wait for pod list to return data ...
	I1119 21:48:32.962300   14179 default_sa.go:34] waiting for default service account to be created ...
	I1119 21:48:32.964060   14179 default_sa.go:45] found service account: "default"
	I1119 21:48:32.964079   14179 default_sa.go:55] duration metric: took 1.773697ms for default service account to be created ...
	I1119 21:48:32.964088   14179 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 21:48:32.967168   14179 system_pods.go:86] 20 kube-system pods found
	I1119 21:48:32.967197   14179 system_pods.go:89] "amd-gpu-device-plugin-2tvsr" [2423e06d-a3f8-4cdf-9d51-7007aac8105b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 21:48:32.967208   14179 system_pods.go:89] "coredns-66bc5c9577-7v6rp" [025a1fd1-54b8-4c2a-9396-a314e9e9ce42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:48:32.967220   14179 system_pods.go:89] "csi-hostpath-attacher-0" [5cb09132-5e1a-4574-a6a5-51703af7e782] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:48:32.967230   14179 system_pods.go:89] "csi-hostpath-resizer-0" [65e11f74-ff7f-4f74-97a3-38ad91e62f43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:48:32.967243   14179 system_pods.go:89] "csi-hostpathplugin-2mv8p" [afa72be6-aaa8-49bb-be6d-f69f6ab56d62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:48:32.967251   14179 system_pods.go:89] "etcd-addons-418049" [d62d961a-4ff4-4725-8117-7980a03c4db6] Running
	I1119 21:48:32.967258   14179 system_pods.go:89] "kindnet-52bj8" [2737a8dd-e93f-431c-be31-f1b22dce9519] Running
	I1119 21:48:32.967267   14179 system_pods.go:89] "kube-apiserver-addons-418049" [f7b37bb1-9c05-41f7-b76d-fa279ef5e122] Running
	I1119 21:48:32.967272   14179 system_pods.go:89] "kube-controller-manager-addons-418049" [5b61c8e6-3136-4a52-b7cd-be38ac892b5f] Running
	I1119 21:48:32.967287   14179 system_pods.go:89] "kube-ingress-dns-minikube" [ce2e729e-662a-477b-a9ff-ee58569a350d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:48:32.967292   14179 system_pods.go:89] "kube-proxy-8rrhm" [ea9ea337-5c88-4577-b868-82aaa0234723] Running
	I1119 21:48:32.967298   14179 system_pods.go:89] "kube-scheduler-addons-418049" [0006366e-06a4-402a-a25b-dab34f284544] Running
	I1119 21:48:32.967305   14179 system_pods.go:89] "metrics-server-85b7d694d7-ggkmz" [d454f251-2d9d-4a61-a3a8-4aa052b74bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:48:32.967313   14179 system_pods.go:89] "nvidia-device-plugin-daemonset-86rtv" [a0be7cf6-3cc7-4cee-aea7-f7413045caad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:48:32.967322   14179 system_pods.go:89] "registry-6b586f9694-7pv4f" [c273eed6-5720-40a2-aac8-2149c492c58d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:48:32.967330   14179 system_pods.go:89] "registry-creds-764b6fb674-j5lrp" [eefdd28e-9cfa-4e4a-8c18-ecececdc9c06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:48:32.967340   14179 system_pods.go:89] "registry-proxy-znvmk" [5769f042-e96d-4796-9313-85e723be54a5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:48:32.967348   14179 system_pods.go:89] "snapshot-controller-7d9fbc56b8-knb29" [1f62ae02-aa23-4fae-b7dd-44e2385bce74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:32.967356   14179 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rcvt9" [718fcfab-fbf5-46f9-84ed-a5b10d561277] Pending
	I1119 21:48:32.967364   14179 system_pods.go:89] "storage-provisioner" [147f8cf8-e9ef-4b12-afd8-1fbb995db186] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:48:32.967383   14179 retry.go:31] will retry after 196.651691ms: missing components: kube-dns
	I1119 21:48:32.987371   14179 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1119 21:48:32.987391   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:33.173708   14179 system_pods.go:86] 20 kube-system pods found
	I1119 21:48:33.173747   14179 system_pods.go:89] "amd-gpu-device-plugin-2tvsr" [2423e06d-a3f8-4cdf-9d51-7007aac8105b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 21:48:33.173757   14179 system_pods.go:89] "coredns-66bc5c9577-7v6rp" [025a1fd1-54b8-4c2a-9396-a314e9e9ce42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 21:48:33.173766   14179 system_pods.go:89] "csi-hostpath-attacher-0" [5cb09132-5e1a-4574-a6a5-51703af7e782] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:48:33.173773   14179 system_pods.go:89] "csi-hostpath-resizer-0" [65e11f74-ff7f-4f74-97a3-38ad91e62f43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:48:33.173782   14179 system_pods.go:89] "csi-hostpathplugin-2mv8p" [afa72be6-aaa8-49bb-be6d-f69f6ab56d62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:48:33.173787   14179 system_pods.go:89] "etcd-addons-418049" [d62d961a-4ff4-4725-8117-7980a03c4db6] Running
	I1119 21:48:33.173793   14179 system_pods.go:89] "kindnet-52bj8" [2737a8dd-e93f-431c-be31-f1b22dce9519] Running
	I1119 21:48:33.173798   14179 system_pods.go:89] "kube-apiserver-addons-418049" [f7b37bb1-9c05-41f7-b76d-fa279ef5e122] Running
	I1119 21:48:33.173804   14179 system_pods.go:89] "kube-controller-manager-addons-418049" [5b61c8e6-3136-4a52-b7cd-be38ac892b5f] Running
	I1119 21:48:33.173831   14179 system_pods.go:89] "kube-ingress-dns-minikube" [ce2e729e-662a-477b-a9ff-ee58569a350d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:48:33.173838   14179 system_pods.go:89] "kube-proxy-8rrhm" [ea9ea337-5c88-4577-b868-82aaa0234723] Running
	I1119 21:48:33.173844   14179 system_pods.go:89] "kube-scheduler-addons-418049" [0006366e-06a4-402a-a25b-dab34f284544] Running
	I1119 21:48:33.173851   14179 system_pods.go:89] "metrics-server-85b7d694d7-ggkmz" [d454f251-2d9d-4a61-a3a8-4aa052b74bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:48:33.173859   14179 system_pods.go:89] "nvidia-device-plugin-daemonset-86rtv" [a0be7cf6-3cc7-4cee-aea7-f7413045caad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:48:33.173869   14179 system_pods.go:89] "registry-6b586f9694-7pv4f" [c273eed6-5720-40a2-aac8-2149c492c58d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:48:33.173930   14179 system_pods.go:89] "registry-creds-764b6fb674-j5lrp" [eefdd28e-9cfa-4e4a-8c18-ecececdc9c06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:48:33.173947   14179 system_pods.go:89] "registry-proxy-znvmk" [5769f042-e96d-4796-9313-85e723be54a5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:48:33.173955   14179 system_pods.go:89] "snapshot-controller-7d9fbc56b8-knb29" [1f62ae02-aa23-4fae-b7dd-44e2385bce74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:33.173964   14179 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rcvt9" [718fcfab-fbf5-46f9-84ed-a5b10d561277] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:33.173971   14179 system_pods.go:89] "storage-provisioner" [147f8cf8-e9ef-4b12-afd8-1fbb995db186] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 21:48:33.173991   14179 retry.go:31] will retry after 371.854576ms: missing components: kube-dns
	I1119 21:48:33.270020   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:33.408919   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:33.427895   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:33.508730   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:33.549486   14179 system_pods.go:86] 20 kube-system pods found
	I1119 21:48:33.549515   14179 system_pods.go:89] "amd-gpu-device-plugin-2tvsr" [2423e06d-a3f8-4cdf-9d51-7007aac8105b] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1119 21:48:33.549521   14179 system_pods.go:89] "coredns-66bc5c9577-7v6rp" [025a1fd1-54b8-4c2a-9396-a314e9e9ce42] Running
	I1119 21:48:33.549528   14179 system_pods.go:89] "csi-hostpath-attacher-0" [5cb09132-5e1a-4574-a6a5-51703af7e782] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1119 21:48:33.549533   14179 system_pods.go:89] "csi-hostpath-resizer-0" [65e11f74-ff7f-4f74-97a3-38ad91e62f43] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1119 21:48:33.549540   14179 system_pods.go:89] "csi-hostpathplugin-2mv8p" [afa72be6-aaa8-49bb-be6d-f69f6ab56d62] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1119 21:48:33.549545   14179 system_pods.go:89] "etcd-addons-418049" [d62d961a-4ff4-4725-8117-7980a03c4db6] Running
	I1119 21:48:33.549549   14179 system_pods.go:89] "kindnet-52bj8" [2737a8dd-e93f-431c-be31-f1b22dce9519] Running
	I1119 21:48:33.549553   14179 system_pods.go:89] "kube-apiserver-addons-418049" [f7b37bb1-9c05-41f7-b76d-fa279ef5e122] Running
	I1119 21:48:33.549556   14179 system_pods.go:89] "kube-controller-manager-addons-418049" [5b61c8e6-3136-4a52-b7cd-be38ac892b5f] Running
	I1119 21:48:33.549564   14179 system_pods.go:89] "kube-ingress-dns-minikube" [ce2e729e-662a-477b-a9ff-ee58569a350d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1119 21:48:33.549568   14179 system_pods.go:89] "kube-proxy-8rrhm" [ea9ea337-5c88-4577-b868-82aaa0234723] Running
	I1119 21:48:33.549572   14179 system_pods.go:89] "kube-scheduler-addons-418049" [0006366e-06a4-402a-a25b-dab34f284544] Running
	I1119 21:48:33.549581   14179 system_pods.go:89] "metrics-server-85b7d694d7-ggkmz" [d454f251-2d9d-4a61-a3a8-4aa052b74bf1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1119 21:48:33.549586   14179 system_pods.go:89] "nvidia-device-plugin-daemonset-86rtv" [a0be7cf6-3cc7-4cee-aea7-f7413045caad] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1119 21:48:33.549592   14179 system_pods.go:89] "registry-6b586f9694-7pv4f" [c273eed6-5720-40a2-aac8-2149c492c58d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1119 21:48:33.549597   14179 system_pods.go:89] "registry-creds-764b6fb674-j5lrp" [eefdd28e-9cfa-4e4a-8c18-ecececdc9c06] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1119 21:48:33.549603   14179 system_pods.go:89] "registry-proxy-znvmk" [5769f042-e96d-4796-9313-85e723be54a5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1119 21:48:33.549607   14179 system_pods.go:89] "snapshot-controller-7d9fbc56b8-knb29" [1f62ae02-aa23-4fae-b7dd-44e2385bce74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:33.549615   14179 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rcvt9" [718fcfab-fbf5-46f9-84ed-a5b10d561277] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1119 21:48:33.549619   14179 system_pods.go:89] "storage-provisioner" [147f8cf8-e9ef-4b12-afd8-1fbb995db186] Running
	I1119 21:48:33.549626   14179 system_pods.go:126] duration metric: took 585.532417ms to wait for k8s-apps to be running ...
	I1119 21:48:33.549635   14179 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 21:48:33.549671   14179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 21:48:33.562196   14179 system_svc.go:56] duration metric: took 12.554401ms WaitForService to wait for kubelet
	I1119 21:48:33.562225   14179 kubeadm.go:587] duration metric: took 42.128229298s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 21:48:33.562248   14179 node_conditions.go:102] verifying NodePressure condition ...
	I1119 21:48:33.564114   14179 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 21:48:33.564136   14179 node_conditions.go:123] node cpu capacity is 8
	I1119 21:48:33.564148   14179 node_conditions.go:105] duration metric: took 1.895747ms to run NodePressure ...
	I1119 21:48:33.564159   14179 start.go:242] waiting for startup goroutines ...
	I1119 21:48:33.704418   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:33.908257   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:33.927977   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:33.989207   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:34.204676   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:34.408710   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:34.428273   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:34.486791   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:34.704617   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:34.909062   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:34.928163   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:34.986125   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:35.205170   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:35.410148   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:35.427570   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:35.487144   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:35.704966   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:35.908888   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:35.927237   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:35.986372   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:36.204180   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:36.409135   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:36.427440   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:36.486509   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:36.704475   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:36.908551   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:36.927963   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:36.987587   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:37.204388   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:37.408796   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:37.428171   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:37.486839   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:37.704908   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:37.908965   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:37.927355   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:37.987104   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:38.204638   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:38.407939   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:38.426879   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:38.486915   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:38.704558   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:38.908871   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:38.928510   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:38.986771   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:39.204810   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:39.409687   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:39.428310   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:39.486827   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:39.704560   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:39.908599   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:39.928760   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:39.987028   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:40.205259   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:40.408526   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:40.429184   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:40.488184   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:40.705329   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:40.908298   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:40.927386   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:40.986478   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:41.204560   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:41.408848   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:41.428221   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:41.511992   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:41.705101   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:41.909296   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:41.928081   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:41.987434   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:42.204490   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:42.408112   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:42.427173   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:42.486308   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:42.703637   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:42.908500   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:42.928045   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:42.987023   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:43.204836   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:43.409392   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:43.427926   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:43.529117   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:43.704883   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:43.908797   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:43.928099   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:43.986202   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:44.204550   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:44.408605   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:44.427985   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:44.487747   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:44.704371   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:44.967496   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:44.967508   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:44.986231   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:45.203762   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:45.408353   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:45.427230   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:45.527274   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:45.706042   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:45.907636   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:45.927658   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:45.987093   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:46.205151   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:46.409120   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:46.427227   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:46.486509   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:46.703976   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:46.908749   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:46.927183   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:46.986894   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:47.204716   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:47.408845   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:47.428548   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:47.528714   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:47.703984   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:47.908804   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:47.928321   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:47.986546   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:48.204202   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:48.407652   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:48.427587   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:48.507784   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:48.704912   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:48.909258   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:48.927386   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:48.986478   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:49.204306   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:49.408179   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:49.427086   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:49.508693   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:49.703889   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:49.908426   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:49.927684   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:49.987010   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:50.204660   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:50.408218   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:50.427076   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:50.487097   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:50.725146   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:50.909497   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:50.927949   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:50.987146   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:51.205046   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:51.407721   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:51.427443   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:51.486381   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:51.703724   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:51.908408   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:51.926940   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:52.008709   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:52.204119   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:52.408284   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:52.427383   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:52.487589   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:52.704708   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:52.908338   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:52.926968   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:52.986500   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:53.203447   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:53.407640   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:53.427468   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:53.486462   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:53.703973   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:53.909196   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:53.927461   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:53.986619   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:54.204635   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:54.410262   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:54.429174   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:54.487147   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:54.705215   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:54.908014   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:55.024754   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:55.024767   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:55.257505   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:55.408151   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:55.427615   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:55.487345   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:55.704893   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:55.908638   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:55.927490   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:55.986143   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:56.204842   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:56.409203   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:56.427665   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:56.486972   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:56.704655   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:56.908422   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:57.009059   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:57.009222   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:57.205352   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:57.408608   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:57.428265   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:57.487029   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:57.704732   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:57.908394   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:57.929414   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:57.987395   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:58.203793   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:58.409060   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:58.427201   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:58.486353   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1119 21:48:58.703534   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:58.907973   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:58.926903   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:58.986513   14179 kapi.go:107] duration metric: took 1m6.502613345s to wait for kubernetes.io/minikube-addons=registry ...
	I1119 21:48:59.204595   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:59.410221   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:59.428457   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:48:59.704801   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:48:59.908657   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:48:59.976877   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:00.204380   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:00.408230   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:00.427500   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:00.704339   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:00.908035   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:00.927195   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:01.204945   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:01.411762   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:01.428575   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:01.704195   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:01.908544   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:01.927221   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:02.207063   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:02.408897   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:02.426861   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:02.704220   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:02.909230   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:02.927592   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:03.204863   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:03.465928   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:03.466241   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:03.703927   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:03.908630   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:03.927472   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:04.204193   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:04.408426   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:04.427965   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:04.704424   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:04.907719   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:04.927148   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:05.204388   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:05.409248   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1119 21:49:05.428498   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:05.705294   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:05.909029   14179 kapi.go:107] duration metric: took 1m12.50379732s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1119 21:49:05.927519   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:06.203794   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:06.427235   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:06.754169   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:06.928217   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:07.204399   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:07.428499   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:07.704573   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:07.928305   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:08.204158   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:08.427578   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:08.704300   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:08.928307   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:09.204516   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:09.427463   14179 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1119 21:49:09.704472   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:09.930488   14179 kapi.go:107] duration metric: took 1m17.005797471s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1119 21:49:10.203731   14179 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1119 21:49:10.704302   14179 kapi.go:107] duration metric: took 1m11.002988s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1119 21:49:10.705337   14179 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-418049 cluster.
	I1119 21:49:10.706242   14179 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1119 21:49:10.707072   14179 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1119 21:49:10.708119   14179 out.go:179] * Enabled addons: ingress-dns, metrics-server, amd-gpu-device-plugin, cloud-spanner, nvidia-device-plugin, inspektor-gadget, storage-provisioner, registry-creds, yakd, default-storageclass, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1119 21:49:10.709029   14179 addons.go:515] duration metric: took 1m19.274995497s for enable addons: enabled=[ingress-dns metrics-server amd-gpu-device-plugin cloud-spanner nvidia-device-plugin inspektor-gadget storage-provisioner registry-creds yakd default-storageclass storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1119 21:49:10.709065   14179 start.go:247] waiting for cluster config update ...
	I1119 21:49:10.709089   14179 start.go:256] writing updated cluster config ...
	I1119 21:49:10.709316   14179 ssh_runner.go:195] Run: rm -f paused
	I1119 21:49:10.712972   14179 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 21:49:10.715182   14179 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7v6rp" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:10.718499   14179 pod_ready.go:94] pod "coredns-66bc5c9577-7v6rp" is "Ready"
	I1119 21:49:10.718517   14179 pod_ready.go:86] duration metric: took 3.316776ms for pod "coredns-66bc5c9577-7v6rp" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:10.720007   14179 pod_ready.go:83] waiting for pod "etcd-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:10.723044   14179 pod_ready.go:94] pod "etcd-addons-418049" is "Ready"
	I1119 21:49:10.723063   14179 pod_ready.go:86] duration metric: took 3.034414ms for pod "etcd-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:10.724616   14179 pod_ready.go:83] waiting for pod "kube-apiserver-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:10.727579   14179 pod_ready.go:94] pod "kube-apiserver-addons-418049" is "Ready"
	I1119 21:49:10.727595   14179 pod_ready.go:86] duration metric: took 2.963666ms for pod "kube-apiserver-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:10.728994   14179 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:11.116753   14179 pod_ready.go:94] pod "kube-controller-manager-addons-418049" is "Ready"
	I1119 21:49:11.116778   14179 pod_ready.go:86] duration metric: took 387.766983ms for pod "kube-controller-manager-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:11.316532   14179 pod_ready.go:83] waiting for pod "kube-proxy-8rrhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:11.716236   14179 pod_ready.go:94] pod "kube-proxy-8rrhm" is "Ready"
	I1119 21:49:11.716260   14179 pod_ready.go:86] duration metric: took 399.707199ms for pod "kube-proxy-8rrhm" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:11.916989   14179 pod_ready.go:83] waiting for pod "kube-scheduler-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:12.315597   14179 pod_ready.go:94] pod "kube-scheduler-addons-418049" is "Ready"
	I1119 21:49:12.315619   14179 pod_ready.go:86] duration metric: took 398.608571ms for pod "kube-scheduler-addons-418049" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 21:49:12.315630   14179 pod_ready.go:40] duration metric: took 1.602635151s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 21:49:12.360149   14179 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 21:49:12.361555   14179 out.go:179] * Done! kubectl is now configured to use "addons-418049" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 21:49:31 addons-418049 crio[774]: time="2025-11-19T21:49:31.79706292Z" level=info msg="Started container" PID=6866 containerID=e497da9b4c9937e5bf0bae16b4ff2c17404d8653244045716e4a594e9189c259 description=default/registry-test/registry-test id=c6d409c2-b270-4b15-958e-bcc622650cae name=/runtime.v1.RuntimeService/StartContainer sandboxID=a81e26530e0c293279730f3180426efed964c37d9ab6caac5f4a7b5cb0e6bf9f
	Nov 19 21:49:33 addons-418049 crio[774]: time="2025-11-19T21:49:33.688394121Z" level=info msg="Stopping pod sandbox: a81e26530e0c293279730f3180426efed964c37d9ab6caac5f4a7b5cb0e6bf9f" id=1f0a143a-4160-406f-88b1-68e870e85c54 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 21:49:33 addons-418049 crio[774]: time="2025-11-19T21:49:33.68870808Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:a81e26530e0c293279730f3180426efed964c37d9ab6caac5f4a7b5cb0e6bf9f UID:347e277b-9a32-4857-a8a3-534666b2fa6c NetNS:/var/run/netns/23aab65c-0ef7-4e29-8bd1-11d5ddcf1fb4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d9c190}] Aliases:map[]}"
	Nov 19 21:49:33 addons-418049 crio[774]: time="2025-11-19T21:49:33.688897983Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Nov 19 21:49:33 addons-418049 crio[774]: time="2025-11-19T21:49:33.712751168Z" level=info msg="Stopped pod sandbox: a81e26530e0c293279730f3180426efed964c37d9ab6caac5f4a7b5cb0e6bf9f" id=1f0a143a-4160-406f-88b1-68e870e85c54 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 21:49:34 addons-418049 crio[774]: time="2025-11-19T21:49:34.955960368Z" level=info msg="Running pod sandbox: default/task-pv-pod/POD" id=2e3ceb9a-843d-47ef-8fa9-15bf1a925a68 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 21:49:34 addons-418049 crio[774]: time="2025-11-19T21:49:34.956030841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:49:34 addons-418049 crio[774]: time="2025-11-19T21:49:34.962106977Z" level=info msg="Got pod network &{Name:task-pv-pod Namespace:default ID:e48a48231ff2ae027c590d9ebce480c88ecd3f50560980bd3fd170606157363a UID:8f850572-f75c-4891-8dd2-7e45898053b9 NetNS:/var/run/netns/a12b36a8-f1bc-4882-9863-4ebf1d953ee4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ada8}] Aliases:map[]}"
	Nov 19 21:49:34 addons-418049 crio[774]: time="2025-11-19T21:49:34.962137975Z" level=info msg="Adding pod default_task-pv-pod to CNI network \"kindnet\" (type=ptp)"
	Nov 19 21:49:34 addons-418049 crio[774]: time="2025-11-19T21:49:34.978348914Z" level=info msg="Got pod network &{Name:task-pv-pod Namespace:default ID:e48a48231ff2ae027c590d9ebce480c88ecd3f50560980bd3fd170606157363a UID:8f850572-f75c-4891-8dd2-7e45898053b9 NetNS:/var/run/netns/a12b36a8-f1bc-4882-9863-4ebf1d953ee4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008ada8}] Aliases:map[]}"
	Nov 19 21:49:34 addons-418049 crio[774]: time="2025-11-19T21:49:34.978497132Z" level=info msg="Checking pod default_task-pv-pod for CNI network kindnet (type=ptp)"
	Nov 19 21:49:34 addons-418049 crio[774]: time="2025-11-19T21:49:34.979424233Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 21:49:34 addons-418049 crio[774]: time="2025-11-19T21:49:34.980526132Z" level=info msg="Ran pod sandbox e48a48231ff2ae027c590d9ebce480c88ecd3f50560980bd3fd170606157363a with infra container: default/task-pv-pod/POD" id=2e3ceb9a-843d-47ef-8fa9-15bf1a925a68 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 21:49:34 addons-418049 crio[774]: time="2025-11-19T21:49:34.981765841Z" level=info msg="Pulling image: docker.io/nginx:latest" id=f1308ed6-0efe-497a-b6dd-4e55d7ec1b0b name=/runtime.v1.ImageService/PullImage
	Nov 19 21:49:34 addons-418049 crio[774]: time="2025-11-19T21:49:34.983763065Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Nov 19 21:49:38 addons-418049 crio[774]: time="2025-11-19T21:49:38.544240854Z" level=info msg="Pulled image: docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541" id=f1308ed6-0efe-497a-b6dd-4e55d7ec1b0b name=/runtime.v1.ImageService/PullImage
	Nov 19 21:49:38 addons-418049 crio[774]: time="2025-11-19T21:49:38.544854496Z" level=info msg="Checking image status: docker.io/nginx:latest" id=9ca3358c-b822-4da9-bc28-fc7d11153894 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:49:38 addons-418049 crio[774]: time="2025-11-19T21:49:38.547060024Z" level=info msg="Checking image status: docker.io/nginx" id=2b68458f-1136-4dca-a1da-89abdd435332 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 21:49:38 addons-418049 crio[774]: time="2025-11-19T21:49:38.551519233Z" level=info msg="Creating container: default/task-pv-pod/task-pv-container" id=e55e2702-d854-42f2-9b91-f1226a1f64d2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:49:38 addons-418049 crio[774]: time="2025-11-19T21:49:38.551645306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:49:38 addons-418049 crio[774]: time="2025-11-19T21:49:38.557701852Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:49:38 addons-418049 crio[774]: time="2025-11-19T21:49:38.558345108Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 21:49:38 addons-418049 crio[774]: time="2025-11-19T21:49:38.585292382Z" level=info msg="Created container b537d76efe7049c7ac60fd33a0c1952ab796f65767b2d4fc03bf69cdbb57dea0: default/task-pv-pod/task-pv-container" id=e55e2702-d854-42f2-9b91-f1226a1f64d2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 21:49:38 addons-418049 crio[774]: time="2025-11-19T21:49:38.585805719Z" level=info msg="Starting container: b537d76efe7049c7ac60fd33a0c1952ab796f65767b2d4fc03bf69cdbb57dea0" id=b9f8183a-893d-4a09-904f-695088914738 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 21:49:38 addons-418049 crio[774]: time="2025-11-19T21:49:38.587634696Z" level=info msg="Started container" PID=7253 containerID=b537d76efe7049c7ac60fd33a0c1952ab796f65767b2d4fc03bf69cdbb57dea0 description=default/task-pv-pod/task-pv-container id=b9f8183a-893d-4a09-904f-695088914738 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e48a48231ff2ae027c590d9ebce480c88ecd3f50560980bd3fd170606157363a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	b537d76efe704       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                                              Less than a second ago   Running             task-pv-container                        0                   e48a48231ff2a       task-pv-pod                                                  default
	e497da9b4c993       gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee                                          7 seconds ago            Exited              registry-test                            0                   a81e26530e0c2       registry-test                                                default
	db8a80c952d2f       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                                             8 seconds ago            Exited              helper-pod                               0                   0e831f95fd525       helper-pod-delete-pvc-507d12fa-be38-43d5-a275-67581d2b4b4d   local-path-storage
	0e13e1d0bec93       docker.io/library/busybox@sha256:00baf5736376036ea4bc1a1c075784fc98a79186604d5d41305cd9b428b3b737                                            12 seconds ago           Exited              busybox                                  0                   17bdab404a86e       test-local-path                                              default
	f327476b670cf       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            17 seconds ago           Exited              helper-pod                               0                   396ff8e1d31b2       helper-pod-create-pvc-507d12fa-be38-43d5-a275-67581d2b4b4d   local-path-storage
	e413d32a250e7       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          25 seconds ago           Running             busybox                                  0                   7d68704ff1904       busybox                                                      default
	1c15072ba6f0d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 29 seconds ago           Running             gcp-auth                                 0                   abb24fffdfd8f       gcp-auth-78565c9fb4-9cbs7                                    gcp-auth
	846acb6dc9f45       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             30 seconds ago           Running             controller                               0                   c14b49a91426d       ingress-nginx-controller-6c8bf45fb-jbmr5                     ingress-nginx
	615461f73700f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          34 seconds ago           Running             csi-snapshotter                          0                   ce1edd6ff7eef       csi-hostpathplugin-2mv8p                                     kube-system
	640ee0941acbe       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          35 seconds ago           Running             csi-provisioner                          0                   ce1edd6ff7eef       csi-hostpathplugin-2mv8p                                     kube-system
	aa6d22c4422b5       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            36 seconds ago           Running             liveness-probe                           0                   ce1edd6ff7eef       csi-hostpathplugin-2mv8p                                     kube-system
	c34f9e102966e       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           37 seconds ago           Running             hostpath                                 0                   ce1edd6ff7eef       csi-hostpathplugin-2mv8p                                     kube-system
	b14bbbfda2b31       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            38 seconds ago           Running             gadget                                   0                   1f2f24c99fd54       gadget-9ww4s                                                 gadget
	4fe0dd3f7607b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                40 seconds ago           Running             node-driver-registrar                    0                   ce1edd6ff7eef       csi-hostpathplugin-2mv8p                                     kube-system
	49f15b33a19b1       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              41 seconds ago           Running             registry-proxy                           0                   dad66d5865894       registry-proxy-znvmk                                         kube-system
	ee296b974e145       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     42 seconds ago           Running             amd-gpu-device-plugin                    0                   a6f9009efca6b       amd-gpu-device-plugin-2tvsr                                  kube-system
	953cd4622bdea       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     43 seconds ago           Running             nvidia-device-plugin-ctr                 0                   990ae8691ce33       nvidia-device-plugin-daemonset-86rtv                         kube-system
	cabdf495a7872       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              46 seconds ago           Running             csi-resizer                              0                   bc416f7a1b2ac       csi-hostpath-resizer-0                                       kube-system
	5ea38cfc3f2c2       884bd0ac01c8f31e881485be470970faa6043bd20f3c592f832e6e0233b4cf45                                                                             46 seconds ago           Exited              patch                                    1                   a1c94815e9bfb       gcp-auth-certs-patch-ksqjg                                   gcp-auth
	8c2b153c22cf2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   47 seconds ago           Exited              patch                                    0                   e22af24488d53       ingress-nginx-admission-patch-qddgt                          ingress-nginx
	254638acfaa6f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      47 seconds ago           Running             volume-snapshot-controller               0                   73f35a767436d       snapshot-controller-7d9fbc56b8-rcvt9                         kube-system
	6cfbbcb1b99b9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   47 seconds ago           Exited              create                                   0                   5954a9cf920f7       gcp-auth-certs-create-5px2l                                  gcp-auth
	57eb757d2767f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   48 seconds ago           Exited              create                                   0                   c151f3665cad4       ingress-nginx-admission-create-5rv6p                         ingress-nginx
	a788e98cc9d95       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             48 seconds ago           Running             local-path-provisioner                   0                   2a6b288f0591e       local-path-provisioner-648f6765c9-rqhgx                      local-path-storage
	702bace1b1665       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             50 seconds ago           Running             csi-attacher                             0                   9b010c1a2a4de       csi-hostpath-attacher-0                                      kube-system
	36eea9c566cd0       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   51 seconds ago           Running             csi-external-health-monitor-controller   0                   ce1edd6ff7eef       csi-hostpathplugin-2mv8p                                     kube-system
	acd4c407fc320       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      51 seconds ago           Running             volume-snapshot-controller               0                   c67d84aeb9cc8       snapshot-controller-7d9fbc56b8-knb29                         kube-system
	abf2da285e255       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           52 seconds ago           Running             registry                                 0                   6ab5102654906       registry-6b586f9694-7pv4f                                    kube-system
	963e25c80a5c1       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              54 seconds ago           Running             yakd                                     0                   941edd9914950       yakd-dashboard-5ff678cb9-9g826                               yakd-dashboard
	c8c39f318fa44       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               57 seconds ago           Running             cloud-spanner-emulator                   0                   7e18779ef0c6b       cloud-spanner-emulator-6f9fcf858b-tqntd                      default
	a0a23eb827f27       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        59 seconds ago           Running             metrics-server                           0                   ee9590b327fa6       metrics-server-85b7d694d7-ggkmz                              kube-system
	acffddf4a9a12       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago       Running             minikube-ingress-dns                     0                   28935190f110a       kube-ingress-dns-minikube                                    kube-system
	f6a9f035506dd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago       Running             coredns                                  0                   b3d5be5a1107f       coredns-66bc5c9577-7v6rp                                     kube-system
	477c8991360a1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago       Running             storage-provisioner                      0                   66a3b652fe301       storage-provisioner                                          kube-system
	292ee6aa235ef       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago       Running             kube-proxy                               0                   2e743af2fc9e6       kube-proxy-8rrhm                                             kube-system
	a0d1b51e3bef7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago       Running             kindnet-cni                              0                   914f5b455ceef       kindnet-52bj8                                                kube-system
	ffe9f59b44ecc       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago       Running             kube-controller-manager                  0                   e2873697df875       kube-controller-manager-addons-418049                        kube-system
	3365bf7838fc3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago       Running             etcd                                     0                   90942f1365463       etcd-addons-418049                                           kube-system
	63b8d07a3ca42       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago       Running             kube-apiserver                           0                   71cec2a407c85       kube-apiserver-addons-418049                                 kube-system
	639836d21fa22       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago       Running             kube-scheduler                           0                   e29b3fc61b86d       kube-scheduler-addons-418049                                 kube-system
	
	
	==> coredns [f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d] <==
	[INFO] 10.244.0.16:42869 - 60480 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000083516s
	[INFO] 10.244.0.16:40368 - 971 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000048278s
	[INFO] 10.244.0.16:40368 - 761 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000059403s
	[INFO] 10.244.0.16:59938 - 33812 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000031778s
	[INFO] 10.244.0.16:59938 - 33556 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000087935s
	[INFO] 10.244.0.16:58534 - 60318 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090495s
	[INFO] 10.244.0.16:58534 - 60523 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000135748s
	[INFO] 10.244.0.22:44875 - 40264 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000192952s
	[INFO] 10.244.0.22:50237 - 57500 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000171547s
	[INFO] 10.244.0.22:56939 - 29307 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130631s
	[INFO] 10.244.0.22:45740 - 33725 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149459s
	[INFO] 10.244.0.22:59218 - 31679 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000116198s
	[INFO] 10.244.0.22:38962 - 4886 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000144964s
	[INFO] 10.244.0.22:52858 - 2926 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.0055634s
	[INFO] 10.244.0.22:55408 - 32051 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.0057261s
	[INFO] 10.244.0.22:55317 - 7691 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007114797s
	[INFO] 10.244.0.22:55227 - 8589 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007653414s
	[INFO] 10.244.0.22:45336 - 945 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00602642s
	[INFO] 10.244.0.22:37719 - 28938 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006626025s
	[INFO] 10.244.0.22:54573 - 41399 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004478191s
	[INFO] 10.244.0.22:44115 - 28835 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005127919s
	[INFO] 10.244.0.22:43646 - 45961 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000756816s
	[INFO] 10.244.0.22:55612 - 2380 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001217887s
	[INFO] 10.244.0.27:33146 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000213781s
	[INFO] 10.244.0.27:60992 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164973s
	
	
	==> describe nodes <==
	Name:               addons-418049
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-418049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=addons-418049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T21_47_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-418049
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-418049"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 21:47:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-418049
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 21:49:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 21:49:17 +0000   Wed, 19 Nov 2025 21:47:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 21:49:17 +0000   Wed, 19 Nov 2025 21:47:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 21:49:17 +0000   Wed, 19 Nov 2025 21:47:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 21:49:17 +0000   Wed, 19 Nov 2025 21:48:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-418049
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                c4731ef1-8c53-401e-85ea-2fbdcc5178dc
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     cloud-spanner-emulator-6f9fcf858b-tqntd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  default                     task-pv-pod                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  gadget                      gadget-9ww4s                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  gcp-auth                    gcp-auth-78565c9fb4-9cbs7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-jbmr5    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         107s
	  kube-system                 amd-gpu-device-plugin-2tvsr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 coredns-66bc5c9577-7v6rp                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 csi-hostpathplugin-2mv8p                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 etcd-addons-418049                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-52bj8                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-addons-418049                250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-addons-418049       200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-8rrhm                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-addons-418049                100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 metrics-server-85b7d694d7-ggkmz             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         107s
	  kube-system                 nvidia-device-plugin-daemonset-86rtv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 registry-6b586f9694-7pv4f                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 registry-creds-764b6fb674-j5lrp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 registry-proxy-znvmk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 snapshot-controller-7d9fbc56b8-knb29        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 snapshot-controller-7d9fbc56b8-rcvt9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  local-path-storage          local-path-provisioner-648f6765c9-rqhgx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9g826              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 106s  kube-proxy       
	  Normal  Starting                 114s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s  kubelet          Node addons-418049 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s  kubelet          Node addons-418049 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s  kubelet          Node addons-418049 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           109s  node-controller  Node addons-418049 event: Registered Node addons-418049 in Controller
	  Normal  NodeReady                67s   kubelet          Node addons-418049 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov19 21:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001821] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.083008] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.366353] i8042: Warning: Keylock active
	[  +0.010119] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.481746] block sda: the capability attribute has been deprecated.
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9] <==
	{"level":"warn","ts":"2025-11-19T21:47:42.749935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.755853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.768896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.774132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.779683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.784968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.790371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.795846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.802024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.807542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.819145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.824860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.838968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.841999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.847471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.854900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:42.900697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:53.777394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:47:53.784144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:48:20.276127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:48:20.282129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:48:20.296645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:48:20.303755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42972","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T21:48:44.965953Z","caller":"traceutil/trace.go:172","msg":"trace[1611484485] transaction","detail":"{read_only:false; response_revision:982; number_of_response:1; }","duration":"119.176969ms","start":"2025-11-19T21:48:44.846761Z","end":"2025-11-19T21:48:44.965938Z","steps":["trace[1611484485] 'process raft request'  (duration: 54.432746ms)","trace[1611484485] 'compare'  (duration: 64.65383ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T21:49:03.463280Z","caller":"traceutil/trace.go:172","msg":"trace[1687762284] transaction","detail":"{read_only:false; response_revision:1151; number_of_response:1; }","duration":"100.063035ms","start":"2025-11-19T21:49:03.363202Z","end":"2025-11-19T21:49:03.463265Z","steps":["trace[1687762284] 'process raft request'  (duration: 99.904507ms)"],"step_count":1}
	
	
	==> gcp-auth [1c15072ba6f0d836406454de7a4723016c9b47a78d679dfb3effec98451a82c6] <==
	2025/11/19 21:49:09 GCP Auth Webhook started!
	2025/11/19 21:49:12 Ready to marshal response ...
	2025/11/19 21:49:12 Ready to write response ...
	2025/11/19 21:49:12 Ready to marshal response ...
	2025/11/19 21:49:12 Ready to write response ...
	2025/11/19 21:49:12 Ready to marshal response ...
	2025/11/19 21:49:12 Ready to write response ...
	2025/11/19 21:49:20 Ready to marshal response ...
	2025/11/19 21:49:20 Ready to write response ...
	2025/11/19 21:49:20 Ready to marshal response ...
	2025/11/19 21:49:20 Ready to write response ...
	2025/11/19 21:49:30 Ready to marshal response ...
	2025/11/19 21:49:30 Ready to write response ...
	2025/11/19 21:49:30 Ready to marshal response ...
	2025/11/19 21:49:30 Ready to write response ...
	2025/11/19 21:49:34 Ready to marshal response ...
	2025/11/19 21:49:34 Ready to write response ...
	
	
	==> kernel <==
	 21:49:39 up 32 min,  0 user,  load average: 1.86, 0.95, 0.38
	Linux addons-418049 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833] <==
	I1119 21:47:52.299476       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 21:47:52.299710       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 21:47:52.299745       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 21:47:52.300051       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 21:48:22.300397       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 21:48:22.300436       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1119 21:48:22.301126       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 21:48:22.302431       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1119 21:48:23.800298       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 21:48:23.800327       1 metrics.go:72] Registering metrics
	I1119 21:48:23.800415       1 controller.go:711] "Syncing nftables rules"
	I1119 21:48:32.305906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:48:32.305951       1 main.go:301] handling current node
	I1119 21:48:42.299402       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:48:42.299432       1 main.go:301] handling current node
	I1119 21:48:52.298801       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:48:52.298838       1 main.go:301] handling current node
	I1119 21:49:02.299543       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:49:02.299588       1 main.go:301] handling current node
	I1119 21:49:12.299936       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:49:12.299963       1 main.go:301] handling current node
	I1119 21:49:22.299591       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:49:22.299618       1 main.go:301] handling current node
	I1119 21:49:32.299169       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:49:32.299202       1 main.go:301] handling current node
	
	
	==> kube-apiserver [63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d] <==
	E1119 21:48:41.542710       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.38.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.38.199:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.101.38.199:443: connect: connection refused" logger="UnhandledError"
	W1119 21:48:41.542865       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 21:48:41.542928       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1119 21:48:42.544648       1 handler_proxy.go:99] no RequestInfo found in the context
	W1119 21:48:42.544670       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 21:48:42.544689       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1119 21:48:42.544703       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1119 21:48:42.544720       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1119 21:48:42.545830       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1119 21:48:46.551111       1 handler_proxy.go:99] no RequestInfo found in the context
	E1119 21:48:46.551342       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.101.38.199:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.101.38.199:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	E1119 21:48:46.551371       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1119 21:48:46.553669       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1119 21:49:19.997233       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42356: use of closed network connection
	E1119 21:49:20.136937       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42382: use of closed network connection
	I1119 21:49:39.667301       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12] <==
	I1119 21:47:50.260334       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 21:47:50.260443       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 21:47:50.260569       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 21:47:50.260601       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 21:47:50.260624       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 21:47:50.260764       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 21:47:50.260993       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 21:47:50.261716       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 21:47:50.261783       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 21:47:50.262425       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 21:47:50.263753       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 21:47:50.266492       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 21:47:50.268688       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 21:47:50.269787       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 21:47:50.275043       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 21:47:50.282397       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1119 21:47:52.593732       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1119 21:48:20.270998       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1119 21:48:20.271120       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1119 21:48:20.271159       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1119 21:48:20.289245       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1119 21:48:20.292242       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1119 21:48:20.371769       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 21:48:20.392958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 21:48:35.217205       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59] <==
	I1119 21:47:51.804321       1 server_linux.go:53] "Using iptables proxy"
	I1119 21:47:52.063197       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 21:47:52.165657       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 21:47:52.168873       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 21:47:52.174880       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 21:47:52.500054       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 21:47:52.500294       1 server_linux.go:132] "Using iptables Proxier"
	I1119 21:47:52.521826       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 21:47:52.530439       1 server.go:527] "Version info" version="v1.34.1"
	I1119 21:47:52.530532       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 21:47:52.535727       1 config.go:200] "Starting service config controller"
	I1119 21:47:52.536051       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 21:47:52.536757       1 config.go:106] "Starting endpoint slice config controller"
	I1119 21:47:52.536780       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 21:47:52.536836       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 21:47:52.536843       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 21:47:52.537792       1 config.go:309] "Starting node config controller"
	I1119 21:47:52.537811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 21:47:52.539005       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 21:47:52.637732       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 21:47:52.639648       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 21:47:52.639675       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98] <==
	E1119 21:47:43.287291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 21:47:43.287474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 21:47:43.289446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 21:47:43.289450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 21:47:43.289554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 21:47:43.289606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 21:47:43.289609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 21:47:43.289663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 21:47:43.289685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 21:47:43.290525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 21:47:43.290525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 21:47:43.290586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 21:47:43.290601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 21:47:43.290688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 21:47:43.290707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 21:47:43.290765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 21:47:43.290769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 21:47:44.115292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 21:47:44.170384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 21:47:44.275666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 21:47:44.336443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 21:47:44.347289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 21:47:44.358201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 21:47:44.495974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1119 21:47:46.686510       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 21:49:31 addons-418049 kubelet[1280]: I1119 21:49:31.819501    1280 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/611cc572-717e-453b-abfb-a926376eceb5-gcp-creds\") on node \"addons-418049\" DevicePath \"\""
	Nov 19 21:49:31 addons-418049 kubelet[1280]: I1119 21:49:31.819661    1280 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/611cc572-717e-453b-abfb-a926376eceb5-script" (OuterVolumeSpecName: "script") pod "611cc572-717e-453b-abfb-a926376eceb5" (UID: "611cc572-717e-453b-abfb-a926376eceb5"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 19 21:49:31 addons-418049 kubelet[1280]: I1119 21:49:31.821296    1280 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/611cc572-717e-453b-abfb-a926376eceb5-kube-api-access-rnsfx" (OuterVolumeSpecName: "kube-api-access-rnsfx") pod "611cc572-717e-453b-abfb-a926376eceb5" (UID: "611cc572-717e-453b-abfb-a926376eceb5"). InnerVolumeSpecName "kube-api-access-rnsfx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 19 21:49:31 addons-418049 kubelet[1280]: I1119 21:49:31.920228    1280 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rnsfx\" (UniqueName: \"kubernetes.io/projected/611cc572-717e-453b-abfb-a926376eceb5-kube-api-access-rnsfx\") on node \"addons-418049\" DevicePath \"\""
	Nov 19 21:49:31 addons-418049 kubelet[1280]: I1119 21:49:31.920265    1280 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/611cc572-717e-453b-abfb-a926376eceb5-script\") on node \"addons-418049\" DevicePath \"\""
	Nov 19 21:49:32 addons-418049 kubelet[1280]: I1119 21:49:32.684575    1280 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e831f95fd525e2b2f5f12c40596c3f934829f9023bbc078df466931bdd08191"
	Nov 19 21:49:32 addons-418049 kubelet[1280]: E1119 21:49:32.691892    1280 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-507d12fa-be38-43d5-a275-67581d2b4b4d\" is forbidden: User \"system:node:addons-418049\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-418049' and this object" podUID="611cc572-717e-453b-abfb-a926376eceb5" pod="local-path-storage/helper-pod-delete-pvc-507d12fa-be38-43d5-a275-67581d2b4b4d"
	Nov 19 21:49:33 addons-418049 kubelet[1280]: I1119 21:49:33.302682    1280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="611cc572-717e-453b-abfb-a926376eceb5" path="/var/lib/kubelet/pods/611cc572-717e-453b-abfb-a926376eceb5/volumes"
	Nov 19 21:49:33 addons-418049 kubelet[1280]: I1119 21:49:33.733296    1280 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/347e277b-9a32-4857-a8a3-534666b2fa6c-gcp-creds\") pod \"347e277b-9a32-4857-a8a3-534666b2fa6c\" (UID: \"347e277b-9a32-4857-a8a3-534666b2fa6c\") "
	Nov 19 21:49:33 addons-418049 kubelet[1280]: I1119 21:49:33.733362    1280 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5rfz\" (UniqueName: \"kubernetes.io/projected/347e277b-9a32-4857-a8a3-534666b2fa6c-kube-api-access-w5rfz\") pod \"347e277b-9a32-4857-a8a3-534666b2fa6c\" (UID: \"347e277b-9a32-4857-a8a3-534666b2fa6c\") "
	Nov 19 21:49:33 addons-418049 kubelet[1280]: I1119 21:49:33.733417    1280 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/347e277b-9a32-4857-a8a3-534666b2fa6c-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "347e277b-9a32-4857-a8a3-534666b2fa6c" (UID: "347e277b-9a32-4857-a8a3-534666b2fa6c"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 19 21:49:33 addons-418049 kubelet[1280]: I1119 21:49:33.733548    1280 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/347e277b-9a32-4857-a8a3-534666b2fa6c-gcp-creds\") on node \"addons-418049\" DevicePath \"\""
	Nov 19 21:49:33 addons-418049 kubelet[1280]: I1119 21:49:33.736086    1280 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/347e277b-9a32-4857-a8a3-534666b2fa6c-kube-api-access-w5rfz" (OuterVolumeSpecName: "kube-api-access-w5rfz") pod "347e277b-9a32-4857-a8a3-534666b2fa6c" (UID: "347e277b-9a32-4857-a8a3-534666b2fa6c"). InnerVolumeSpecName "kube-api-access-w5rfz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 19 21:49:33 addons-418049 kubelet[1280]: I1119 21:49:33.834371    1280 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w5rfz\" (UniqueName: \"kubernetes.io/projected/347e277b-9a32-4857-a8a3-534666b2fa6c-kube-api-access-w5rfz\") on node \"addons-418049\" DevicePath \"\""
	Nov 19 21:49:34 addons-418049 kubelet[1280]: I1119 21:49:34.693760    1280 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a81e26530e0c293279730f3180426efed964c37d9ab6caac5f4a7b5cb0e6bf9f"
	Nov 19 21:49:34 addons-418049 kubelet[1280]: E1119 21:49:34.695035    1280 status_manager.go:1018] "Failed to get status for pod" err="pods \"registry-test\" is forbidden: User \"system:node:addons-418049\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-418049' and this object" podUID="347e277b-9a32-4857-a8a3-534666b2fa6c" pod="default/registry-test"
	Nov 19 21:49:34 addons-418049 kubelet[1280]: I1119 21:49:34.740080    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/8f850572-f75c-4891-8dd2-7e45898053b9-gcp-creds\") pod \"task-pv-pod\" (UID: \"8f850572-f75c-4891-8dd2-7e45898053b9\") " pod="default/task-pv-pod"
	Nov 19 21:49:34 addons-418049 kubelet[1280]: I1119 21:49:34.740118    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-78b0284f-44a3-48a7-91ef-9668cdef361f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a2960980-c591-11f0-a486-46ab2516ba23\") pod \"task-pv-pod\" (UID: \"8f850572-f75c-4891-8dd2-7e45898053b9\") " pod="default/task-pv-pod"
	Nov 19 21:49:34 addons-418049 kubelet[1280]: I1119 21:49:34.740146    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjg62\" (UniqueName: \"kubernetes.io/projected/8f850572-f75c-4891-8dd2-7e45898053b9-kube-api-access-gjg62\") pod \"task-pv-pod\" (UID: \"8f850572-f75c-4891-8dd2-7e45898053b9\") " pod="default/task-pv-pod"
	Nov 19 21:49:34 addons-418049 kubelet[1280]: I1119 21:49:34.847115    1280 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-78b0284f-44a3-48a7-91ef-9668cdef361f\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^a2960980-c591-11f0-a486-46ab2516ba23\") pod \"task-pv-pod\" (UID: \"8f850572-f75c-4891-8dd2-7e45898053b9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/hostpath.csi.k8s.io/732b34f6143d91c0876b353d716c32c68597437946bc641def19df35761138fc/globalmount\"" pod="default/task-pv-pod"
	Nov 19 21:49:35 addons-418049 kubelet[1280]: E1119 21:49:35.302118    1280 status_manager.go:1018] "Failed to get status for pod" err="pods \"registry-test\" is forbidden: User \"system:node:addons-418049\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-418049' and this object" podUID="347e277b-9a32-4857-a8a3-534666b2fa6c" pod="default/registry-test"
	Nov 19 21:49:35 addons-418049 kubelet[1280]: I1119 21:49:35.302888    1280 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="347e277b-9a32-4857-a8a3-534666b2fa6c" path="/var/lib/kubelet/pods/347e277b-9a32-4857-a8a3-534666b2fa6c/volumes"
	Nov 19 21:49:36 addons-418049 kubelet[1280]: E1119 21:49:36.654588    1280 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 19 21:49:36 addons-418049 kubelet[1280]: E1119 21:49:36.654686    1280 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eefdd28e-9cfa-4e4a-8c18-ecececdc9c06-gcr-creds podName:eefdd28e-9cfa-4e4a-8c18-ecececdc9c06 nodeName:}" failed. No retries permitted until 2025-11-19 21:50:40.654662916 +0000 UTC m=+175.432640322 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/eefdd28e-9cfa-4e4a-8c18-ecececdc9c06-gcr-creds") pod "registry-creds-764b6fb674-j5lrp" (UID: "eefdd28e-9cfa-4e4a-8c18-ecececdc9c06") : secret "registry-creds-gcr" not found
	Nov 19 21:49:38 addons-418049 kubelet[1280]: I1119 21:49:38.723313    1280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod" podStartSLOduration=1.158302343 podStartE2EDuration="4.723291951s" podCreationTimestamp="2025-11-19 21:49:34 +0000 UTC" firstStartedPulling="2025-11-19 21:49:34.981445575 +0000 UTC m=+109.759422935" lastFinishedPulling="2025-11-19 21:49:38.546435181 +0000 UTC m=+113.324412543" observedRunningTime="2025-11-19 21:49:38.721864196 +0000 UTC m=+113.499841575" watchObservedRunningTime="2025-11-19 21:49:38.723291951 +0000 UTC m=+113.501269330"
	
	
	==> storage-provisioner [477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6] <==
	W1119 21:49:15.519103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:17.521902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:17.524937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:19.527130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:19.530397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:21.533721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:21.539187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:23.541865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:23.545268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:25.547470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:25.551060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:27.553242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:27.556862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:29.560109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:29.564000       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:31.566343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:31.571892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:33.574701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:33.578312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:35.581014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:35.584372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:37.587021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:37.590797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:39.593150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:49:39.597098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-418049 -n addons-418049
helpers_test.go:269: (dbg) Run:  kubectl --context addons-418049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx ingress-nginx-admission-create-5rv6p ingress-nginx-admission-patch-qddgt registry-creds-764b6fb674-j5lrp
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-418049 describe pod nginx ingress-nginx-admission-create-5rv6p ingress-nginx-admission-patch-qddgt registry-creds-764b6fb674-j5lrp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-418049 describe pod nginx ingress-nginx-admission-create-5rv6p ingress-nginx-admission-patch-qddgt registry-creds-764b6fb674-j5lrp: exit status 1 (67.214052ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-418049/192.168.49.2
	Start Time:       Wed, 19 Nov 2025 21:49:39 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r62pz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-r62pz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/nginx to addons-418049
	  Normal  Pulling    0s    kubelet            Pulling image "docker.io/nginx:alpine"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-5rv6p" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qddgt" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-j5lrp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-418049 describe pod nginx ingress-nginx-admission-create-5rv6p ingress-nginx-admission-patch-qddgt registry-creds-764b6fb674-j5lrp: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable headlamp --alsologtostderr -v=1: exit status 11 (233.88708ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:49:40.481437   25230 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:49:40.481766   25230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:40.481777   25230 out.go:374] Setting ErrFile to fd 2...
	I1119 21:49:40.481781   25230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:40.482016   25230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:49:40.482243   25230 mustload.go:66] Loading cluster: addons-418049
	I1119 21:49:40.482560   25230 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:40.482569   25230 addons.go:607] checking whether the cluster is paused
	I1119 21:49:40.482654   25230 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:40.482667   25230 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:49:40.483091   25230 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:49:40.501709   25230 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:40.501770   25230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:49:40.517663   25230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:49:40.608985   25230 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:40.609090   25230 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:40.636562   25230 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:49:40.636580   25230 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:49:40.636584   25230 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:49:40.636587   25230 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:49:40.636590   25230 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:49:40.636593   25230 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:49:40.636595   25230 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:49:40.636598   25230 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:49:40.636600   25230 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:49:40.636605   25230 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:49:40.636607   25230 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:49:40.636610   25230 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:49:40.636612   25230 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:49:40.636614   25230 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:49:40.636617   25230 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:49:40.636628   25230 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:49:40.636631   25230 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:49:40.636635   25230 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:49:40.636637   25230 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:49:40.636640   25230 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:49:40.636642   25230 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:49:40.636644   25230 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:49:40.636647   25230 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:49:40.636649   25230 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:49:40.636652   25230 cri.go:89] found id: ""
	I1119 21:49:40.636685   25230 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:49:40.651361   25230 out.go:203] 
	W1119 21:49:40.652893   25230 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:49:40.652910   25230 out.go:285] * 
	* 
	W1119 21:49:40.655943   25230 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:49:40.657233   25230 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.52s)
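Editor's note: this Headlamp failure, and every other MK_ADDON_DISABLE_PAUSED exit in this report, stems from the same pre-flight check visible in the stderr above: after listing kube-system containers via crictl, the addon-disable path runs `sudo runc list -f json` on the node, which exits 1 with "open /run/runc: no such file or directory". The Go sketch below is not minikube source; it only replays those two node-side commands (run it inside the node, e.g. over `minikube ssh`) so the failure mode can be reproduced in isolation. The guess that crio on this image drives containers through an OCI runtime other than runc (or a non-default runc state root), so /run/runc is never created, is an assumption drawn from the error text, not something the log proves.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// checkPaused replays the two node-side commands shown in the failing
	// "addons disable" stderr: list kube-system containers via crictl, then
	// ask runc for its container list. On this node the second step fails
	// because /run/runc does not exist.
	func checkPaused() error {
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return fmt.Errorf("crictl ps: %w", err)
		}
		fmt.Printf("kube-system container ids:\n%s", ids)

		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list: %w (output: %s)", err, out)
		}
		fmt.Printf("runc containers: %s\n", out)
		return nil
	}

	func main() {
		if err := checkPaused(); err != nil {
			fmt.Println("paused-check failed:", err)
		}
	}

If the assumption holds, rerunning the second command as `crun list` (or pointing runc at crio's state directory with its global `--root` flag) would be the natural next diagnostic step: a successful listing there would indicate the check is querying the wrong runtime rather than the cluster actually being paused.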

                                                
                                    
TestAddons/parallel/CloudSpanner (5.24s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-tqntd" [8dfc44a4-e31b-4df7-9526-de8ffa8f49e4] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003117597s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (229.502308ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:49:35.526991   23791 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:49:35.527121   23791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:35.527129   23791 out.go:374] Setting ErrFile to fd 2...
	I1119 21:49:35.527133   23791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:35.527319   23791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:49:35.527543   23791 mustload.go:66] Loading cluster: addons-418049
	I1119 21:49:35.527858   23791 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:35.527877   23791 addons.go:607] checking whether the cluster is paused
	I1119 21:49:35.527963   23791 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:35.527974   23791 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:49:35.528342   23791 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:49:35.545627   23791 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:35.545685   23791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:49:35.562529   23791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:49:35.654343   23791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:35.654439   23791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:35.681971   23791 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:49:35.681996   23791 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:49:35.682002   23791 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:49:35.682007   23791 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:49:35.682009   23791 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:49:35.682021   23791 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:49:35.682028   23791 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:49:35.682031   23791 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:49:35.682033   23791 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:49:35.682038   23791 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:49:35.682045   23791 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:49:35.682055   23791 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:49:35.682060   23791 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:49:35.682067   23791 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:49:35.682071   23791 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:49:35.682081   23791 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:49:35.682088   23791 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:49:35.682094   23791 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:49:35.682098   23791 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:49:35.682102   23791 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:49:35.682113   23791 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:49:35.682118   23791 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:49:35.682121   23791 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:49:35.682123   23791 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:49:35.682125   23791 cri.go:89] found id: ""
	I1119 21:49:35.682172   23791 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:49:35.695302   23791 out.go:203] 
	W1119 21:49:35.696430   23791 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:49:35.696446   23791 out.go:285] * 
	* 
	W1119 21:49:35.699980   23791 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:49:35.701460   23791 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)

TestAddons/parallel/LocalPath (10.1s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-418049 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-418049 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-418049 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [f73c1a1f-b068-4849-ab90-c5a97d675211] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [f73c1a1f-b068-4849-ab90-c5a97d675211] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [f73c1a1f-b068-4849-ab90-c5a97d675211] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002268769s
addons_test.go:967: (dbg) Run:  kubectl --context addons-418049 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 ssh "cat /opt/local-path-provisioner/pvc-507d12fa-be38-43d5-a275-67581d2b4b4d_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-418049 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-418049 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (246.328073ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:49:30.275127   23239 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:49:30.275444   23239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:30.275454   23239 out.go:374] Setting ErrFile to fd 2...
	I1119 21:49:30.275459   23239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:30.275716   23239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:49:30.276055   23239 mustload.go:66] Loading cluster: addons-418049
	I1119 21:49:30.276410   23239 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:30.276427   23239 addons.go:607] checking whether the cluster is paused
	I1119 21:49:30.276541   23239 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:30.276557   23239 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:49:30.277072   23239 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:49:30.294491   23239 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:30.294544   23239 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:49:30.311871   23239 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:49:30.404136   23239 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:30.404243   23239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:30.434253   23239 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:49:30.434282   23239 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:49:30.434286   23239 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:49:30.434288   23239 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:49:30.434291   23239 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:49:30.434295   23239 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:49:30.434297   23239 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:49:30.434299   23239 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:49:30.434302   23239 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:49:30.434311   23239 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:49:30.434322   23239 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:49:30.434329   23239 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:49:30.434333   23239 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:49:30.434336   23239 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:49:30.434340   23239 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:49:30.434361   23239 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:49:30.434370   23239 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:49:30.434377   23239 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:49:30.434381   23239 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:49:30.434384   23239 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:49:30.434391   23239 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:49:30.434395   23239 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:49:30.434402   23239 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:49:30.434406   23239 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:49:30.434412   23239 cri.go:89] found id: ""
	I1119 21:49:30.434472   23239 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:49:30.450201   23239 out.go:203] 
	W1119 21:49:30.455279   23239 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:49:30.455303   23239 out.go:285] * 
	* 
	W1119 21:49:30.460198   23239 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:49:30.462055   23239 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.10s)

TestAddons/parallel/NvidiaDevicePlugin (6.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-86rtv" [a0be7cf6-3cc7-4cee-aea7-f7413045caad] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003230176s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (244.818981ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:49:31.666870   23401 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:49:31.667165   23401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:31.667175   23401 out.go:374] Setting ErrFile to fd 2...
	I1119 21:49:31.667181   23401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:31.667389   23401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:49:31.667657   23401 mustload.go:66] Loading cluster: addons-418049
	I1119 21:49:31.668131   23401 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:31.668155   23401 addons.go:607] checking whether the cluster is paused
	I1119 21:49:31.668283   23401 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:31.668308   23401 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:49:31.668682   23401 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:49:31.689128   23401 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:31.689194   23401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:49:31.708940   23401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:49:31.803565   23401 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:31.803655   23401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:31.832212   23401 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:49:31.832232   23401 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:49:31.832236   23401 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:49:31.832239   23401 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:49:31.832242   23401 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:49:31.832245   23401 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:49:31.832247   23401 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:49:31.832250   23401 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:49:31.832252   23401 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:49:31.832257   23401 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:49:31.832259   23401 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:49:31.832262   23401 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:49:31.832264   23401 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:49:31.832266   23401 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:49:31.832269   23401 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:49:31.832274   23401 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:49:31.832276   23401 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:49:31.832280   23401 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:49:31.832283   23401 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:49:31.832285   23401 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:49:31.832288   23401 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:49:31.832290   23401 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:49:31.832293   23401 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:49:31.832295   23401 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:49:31.832298   23401 cri.go:89] found id: ""
	I1119 21:49:31.832334   23401 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:49:31.844973   23401 out.go:203] 
	W1119 21:49:31.846184   23401 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:49:31.846207   23401 out.go:285] * 
	* 
	W1119 21:49:31.849490   23401 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:49:31.850613   23401 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.25s)

TestAddons/parallel/Yakd (5.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
I1119 21:49:20.370697   12829 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9g826" [297a856a-4752-4dde-8ace-1be498f0ca9b] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003732828s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable yakd --alsologtostderr -v=1: exit status 11 (231.322494ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:49:25.428797   22853 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:49:25.429125   22853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:25.429136   22853 out.go:374] Setting ErrFile to fd 2...
	I1119 21:49:25.429141   22853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:25.429321   22853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:49:25.429569   22853 mustload.go:66] Loading cluster: addons-418049
	I1119 21:49:25.429861   22853 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:25.429873   22853 addons.go:607] checking whether the cluster is paused
	I1119 21:49:25.429954   22853 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:25.429965   22853 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:49:25.430269   22853 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:49:25.447531   22853 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:25.447579   22853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:49:25.463249   22853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:49:25.554826   22853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:25.554889   22853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:25.581686   22853 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:49:25.581715   22853 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:49:25.581721   22853 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:49:25.581725   22853 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:49:25.581727   22853 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:49:25.581731   22853 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:49:25.581733   22853 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:49:25.581736   22853 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:49:25.581738   22853 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:49:25.581744   22853 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:49:25.581756   22853 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:49:25.581764   22853 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:49:25.581767   22853 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:49:25.581770   22853 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:49:25.581773   22853 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:49:25.581778   22853 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:49:25.581783   22853 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:49:25.581787   22853 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:49:25.581790   22853 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:49:25.581793   22853 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:49:25.581795   22853 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:49:25.581798   22853 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:49:25.581800   22853 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:49:25.581803   22853 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:49:25.581805   22853 cri.go:89] found id: ""
	I1119 21:49:25.581864   22853 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:49:25.594733   22853 out.go:203] 
	W1119 21:49:25.595971   22853 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:49:25.595992   22853 out.go:285] * 
	* 
	W1119 21:49:25.599279   22853 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:49:25.600450   22853 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.24s)

TestAddons/parallel/AmdGpuDevicePlugin (6.26s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-2tvsr" [2423e06d-a3f8-4cdf-9d51-7007aac8105b] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.002796062s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-418049 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-418049 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (260.301691ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1119 21:49:41.766949   25507 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:49:41.767220   25507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:41.767230   25507 out.go:374] Setting ErrFile to fd 2...
	I1119 21:49:41.767234   25507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:49:41.767397   25507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:49:41.767660   25507 mustload.go:66] Loading cluster: addons-418049
	I1119 21:49:41.768000   25507 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:41.768016   25507 addons.go:607] checking whether the cluster is paused
	I1119 21:49:41.768099   25507 config.go:182] Loaded profile config "addons-418049": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:49:41.768112   25507 host.go:66] Checking if "addons-418049" exists ...
	I1119 21:49:41.768469   25507 cli_runner.go:164] Run: docker container inspect addons-418049 --format={{.State.Status}}
	I1119 21:49:41.787166   25507 ssh_runner.go:195] Run: systemctl --version
	I1119 21:49:41.787217   25507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-418049
	I1119 21:49:41.806250   25507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/addons-418049/id_rsa Username:docker}
	I1119 21:49:41.906296   25507 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 21:49:41.906377   25507 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 21:49:41.940651   25507 cri.go:89] found id: "615461f73700f8667d6d6f0fce455c535c9de92df0becd3880ca384b2401c5cb"
	I1119 21:49:41.940674   25507 cri.go:89] found id: "640ee0941acbeca8b9935fd77d055d6c756259d374daf1ece795064b1b557fce"
	I1119 21:49:41.940681   25507 cri.go:89] found id: "aa6d22c4422b5d1e9dd3c77428814ef61a5d45f006bf9d16058f64f1a2e7a03d"
	I1119 21:49:41.940687   25507 cri.go:89] found id: "c34f9e102966e2e297795b27725786a3803842de4445ec130e038a156c2e9888"
	I1119 21:49:41.940691   25507 cri.go:89] found id: "4fe0dd3f7607b270a1edfb490055f39216404f33d058309295e74a8a8ad3abb3"
	I1119 21:49:41.940696   25507 cri.go:89] found id: "49f15b33a19b1fcdb4d25da30b49d94353c9fabb873f60985731879b5c811ce4"
	I1119 21:49:41.940700   25507 cri.go:89] found id: "ee296b974e145502ae8db893f02635a057f6d3d80aa893e2418209f128107ac2"
	I1119 21:49:41.940705   25507 cri.go:89] found id: "953cd4622bdea94842a88f1b7e66eae394eaa3ffed1df3b3ef9311049ba5f88a"
	I1119 21:49:41.940709   25507 cri.go:89] found id: "cabdf495a78728bd890fb8a650222e7a573a978a53d700ee25c313405985a54c"
	I1119 21:49:41.940716   25507 cri.go:89] found id: "254638acfaa6f2177c80ad529487362fcb5693da4b81bef98411f1255e3c0d45"
	I1119 21:49:41.940720   25507 cri.go:89] found id: "702bace1b1665a691c7f38a9e4e44b746361403cb616876a5e468518c109313c"
	I1119 21:49:41.940725   25507 cri.go:89] found id: "36eea9c566cd0bf4742dd5259fb273ff4fefbbedf82a7f8788f71d0457b0af4b"
	I1119 21:49:41.940729   25507 cri.go:89] found id: "acd4c407fc320f4f28687069c571d99cf761791a983db709e0244618554708b1"
	I1119 21:49:41.940732   25507 cri.go:89] found id: "abf2da285e255cf6b993513fa90a2c4c5e2dfbcfb76728fc555ccd80f7a020b0"
	I1119 21:49:41.940736   25507 cri.go:89] found id: "a0a23eb827f273f742583f74bd5d627361b31fa260b0ba2b4f7c37106d3a7c14"
	I1119 21:49:41.940750   25507 cri.go:89] found id: "acffddf4a9a12ae6295f3a34cbf69e1fcd4200fe9ddf33b7d90dec46ba68348f"
	I1119 21:49:41.940758   25507 cri.go:89] found id: "f6a9f035506dd83b3e044a0f4ecdcda8385769e3b27094297cfd554c84ae826d"
	I1119 21:49:41.940765   25507 cri.go:89] found id: "477c8991360a12945cdd936738461f47ee893e6dcd23a6a19264d2b464713ef6"
	I1119 21:49:41.940769   25507 cri.go:89] found id: "292ee6aa235efef15b559040b921f52f3277c654c7edb2f08a30cc25e5359c59"
	I1119 21:49:41.940773   25507 cri.go:89] found id: "a0d1b51e3bef75c0ace8aa91a03394c848da7d34f228898d4de32e1d78168833"
	I1119 21:49:41.940777   25507 cri.go:89] found id: "ffe9f59b44ecce765e652e8d7f375c9570ffc4d5b91ccaa51f9a400c2e21be12"
	I1119 21:49:41.940781   25507 cri.go:89] found id: "3365bf7838fc324cc8b88d395f9f3f243f097ce6872b78b7f0ad5c7105434bb9"
	I1119 21:49:41.940785   25507 cri.go:89] found id: "63b8d07a3ca42ae4aa8518f29434a1c9e32c4348408fecbedf1c5b24b3fd6c5d"
	I1119 21:49:41.940789   25507 cri.go:89] found id: "639836d21fa2203ebaa7356071dd9303641791e65acd5e0c988f943340dc5e98"
	I1119 21:49:41.940793   25507 cri.go:89] found id: ""
	I1119 21:49:41.940858   25507 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 21:49:41.957664   25507 out.go:203] 
	W1119 21:49:41.958881   25507 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:49:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 21:49:41.958908   25507 out.go:285] * 
	* 
	W1119 21:49:41.963510   25507 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 21:49:41.964749   25507 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-418049 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.26s)
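All five addon disable failures above (CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd, AmdGpuDevicePlugin) exit 11 for the same reason: before disabling an addon, minikube checks whether the cluster is paused, and that check runs `sudo runc list -f json` on the node, which fails with "open /run/runc: no such file or directory" on this crio-runtime node. A minimal reproduction sketch, assuming the addons-418049 profile from these logs is still running (this is the same check run by hand, not a fix):

	# re-run the exact command the disable path executes on the node
	out/minikube-linux-amd64 -p addons-418049 ssh -- sudo runc list -f json
	# expected: exit status 1, "open /run/runc: no such file or directory"
	# list /run to see which runtime state directories the node actually has
	out/minikube-linux-amd64 -p addons-418049 ssh -- ls /run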

TestFunctional/parallel/ServiceCmdConnect (602.69s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-037096 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-037096 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-nvn8h" [939688e2-e79e-4b52-b3ad-1daf22d93d47] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-037096 -n functional-037096
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-19 22:04:57.94195443 +0000 UTC m=+1080.768977571
functional_test.go:1645: (dbg) Run:  kubectl --context functional-037096 describe po hello-node-connect-7d85dfc575-nvn8h -n default
functional_test.go:1645: (dbg) kubectl --context functional-037096 describe po hello-node-connect-7d85dfc575-nvn8h -n default:
Name:             hello-node-connect-7d85dfc575-nvn8h
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-037096/192.168.49.2
Start Time:       Wed, 19 Nov 2025 21:54:57 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8jsnq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8jsnq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-nvn8h to functional-037096
Normal   Pulling    7m16s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m16s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m16s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m37s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-037096 logs hello-node-connect-7d85dfc575-nvn8h -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-037096 logs hello-node-connect-7d85dfc575-nvn8h -n default: exit status 1 (65.482124ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-nvn8h" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-037096 logs hello-node-connect-7d85dfc575-nvn8h -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
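The kubelet events above indicate the root failure is the image pull itself: with CRI-O's short-name mode enforcing, the unqualified reference kicbase/echo-server resolves to an ambiguous list and is rejected. A sketch of the same deployment with a fully-qualified reference (the docker.io prefix is an assumption, not taken from these logs):

	kubectl --context functional-037096 create deployment hello-node-connect --image docker.io/kicbase/echo-server:latest
	kubectl --context functional-037096 expose deployment hello-node-connect --type=NodePort --port=8080

Relaxing short-name-mode in the node's /etc/containers/registries.conf would be the node-level alternative to qualifying the image name.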
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-037096 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-nvn8h
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-037096/192.168.49.2
Start Time:       Wed, 19 Nov 2025 21:54:57 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8jsnq (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-8jsnq:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-nvn8h to functional-037096
Normal   Pulling    7m16s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m16s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m16s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m52s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m37s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-037096 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-037096 logs -l app=hello-node-connect: exit status 1 (58.246852ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-nvn8h" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-037096 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-037096 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.135.38
IPs:                      10.111.135.38
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32263/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
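The empty Endpoints field above is consistent with the pod never becoming Ready, so the NodePort has nothing behind it; the Service object itself looks correctly configured. A quick confirmation sketch:

	kubectl --context functional-037096 get endpoints hello-node-connect
	kubectl --context functional-037096 get pods -l app=hello-node-connect -o wide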
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-037096
helpers_test.go:243: (dbg) docker inspect functional-037096:

-- stdout --
	[
	    {
	        "Id": "065b490447e4cc9e767c4d46b6897fce19e0208720d4b13cdd9f1287bb65ba6d",
	        "Created": "2025-11-19T21:53:15.903890281Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 36774,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T21:53:15.93237651Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/065b490447e4cc9e767c4d46b6897fce19e0208720d4b13cdd9f1287bb65ba6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/065b490447e4cc9e767c4d46b6897fce19e0208720d4b13cdd9f1287bb65ba6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/065b490447e4cc9e767c4d46b6897fce19e0208720d4b13cdd9f1287bb65ba6d/hosts",
	        "LogPath": "/var/lib/docker/containers/065b490447e4cc9e767c4d46b6897fce19e0208720d4b13cdd9f1287bb65ba6d/065b490447e4cc9e767c4d46b6897fce19e0208720d4b13cdd9f1287bb65ba6d-json.log",
	        "Name": "/functional-037096",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-037096:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-037096",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "065b490447e4cc9e767c4d46b6897fce19e0208720d4b13cdd9f1287bb65ba6d",
	                "LowerDir": "/var/lib/docker/overlay2/139575ee648d39e52468cc6838a77298f07810898c98008be2ac3ab643a4686f-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/139575ee648d39e52468cc6838a77298f07810898c98008be2ac3ab643a4686f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/139575ee648d39e52468cc6838a77298f07810898c98008be2ac3ab643a4686f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/139575ee648d39e52468cc6838a77298f07810898c98008be2ac3ab643a4686f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-037096",
	                "Source": "/var/lib/docker/volumes/functional-037096/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-037096",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-037096",
	                "name.minikube.sigs.k8s.io": "functional-037096",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "dbd2efba9381b9342fcde27ac7a8e4a629a60661c2ed182d8c9601cf4523c34d",
	            "SandboxKey": "/var/run/docker/netns/dbd2efba9381",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-037096": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ee6d1beb9fb42551cff17b8586f4223744999facc763e08213b218b73b8ca8a0",
	                    "EndpointID": "0de16e5f172f2b4f2ac6bdcd4dbc4ee2e15f0fd0439f7b997d7f3ebc1926f0ca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "02:18:e1:43:cf:85",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-037096",
	                        "065b490447e4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-037096 -n functional-037096
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-037096 logs -n 25: (1.163803816s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-037096 ssh -- ls -la /mount-9p                                                                          │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ ssh            │ functional-037096 ssh sudo umount -f /mount-9p                                                                     │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │                     │
	│ mount          │ -p functional-037096 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3833065531/001:/mount3 --alsologtostderr -v=1 │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │                     │
	│ mount          │ -p functional-037096 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3833065531/001:/mount2 --alsologtostderr -v=1 │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │                     │
	│ ssh            │ functional-037096 ssh findmnt -T /mount1                                                                           │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │                     │
	│ mount          │ -p functional-037096 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3833065531/001:/mount1 --alsologtostderr -v=1 │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │                     │
	│ ssh            │ functional-037096 ssh findmnt -T /mount1                                                                           │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ ssh            │ functional-037096 ssh findmnt -T /mount2                                                                           │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ ssh            │ functional-037096 ssh findmnt -T /mount3                                                                           │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ mount          │ -p functional-037096 --kill=true                                                                                   │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │                     │
	│ start          │ -p functional-037096 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │                     │
	│ start          │ -p functional-037096 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │                     │
	│ start          │ -p functional-037096 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-037096 --alsologtostderr -v=1                                                     │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ ssh            │ functional-037096 ssh sudo cat /etc/test/nested/copy/12829/hosts                                                   │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ update-context │ functional-037096 update-context --alsologtostderr -v=2                                                            │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ update-context │ functional-037096 update-context --alsologtostderr -v=2                                                            │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ update-context │ functional-037096 update-context --alsologtostderr -v=2                                                            │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ image          │ functional-037096 image ls --format short --alsologtostderr                                                        │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ ssh            │ functional-037096 ssh pgrep buildkitd                                                                              │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │                     │
	│ image          │ functional-037096 image build -t localhost/my-image:functional-037096 testdata/build --alsologtostderr             │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ image          │ functional-037096 image ls                                                                                         │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ image          │ functional-037096 image ls --format yaml --alsologtostderr                                                         │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ image          │ functional-037096 image ls --format json --alsologtostderr                                                         │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	│ image          │ functional-037096 image ls --format table --alsologtostderr                                                        │ functional-037096 │ jenkins │ v1.37.0 │ 19 Nov 25 21:55 UTC │ 19 Nov 25 21:55 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:55:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:55:17.242625   51604 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:55:17.242715   51604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:17.242724   51604 out.go:374] Setting ErrFile to fd 2...
	I1119 21:55:17.242728   51604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:17.242915   51604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:55:17.243360   51604 out.go:368] Setting JSON to false
	I1119 21:55:17.244254   51604 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2265,"bootTime":1763587052,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:55:17.244375   51604 start.go:143] virtualization: kvm guest
	I1119 21:55:17.245975   51604 out.go:179] * [functional-037096] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 21:55:17.247063   51604 notify.go:221] Checking for updates...
	I1119 21:55:17.247082   51604 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:55:17.248196   51604 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:55:17.249400   51604 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 21:55:17.250476   51604 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 21:55:17.251556   51604 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 21:55:17.252590   51604 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:55:17.253986   51604 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:55:17.254419   51604 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:55:17.279475   51604 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 21:55:17.279543   51604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:55:17.335734   51604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-19 21:55:17.326484982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:55:17.335864   51604 docker.go:319] overlay module found
	I1119 21:55:17.337264   51604 out.go:179] * Using the docker driver based on existing profile
	I1119 21:55:17.338539   51604 start.go:309] selected driver: docker
	I1119 21:55:17.338549   51604 start.go:930] validating driver "docker" against &{Name:functional-037096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-037096 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:55:17.338628   51604 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:55:17.338694   51604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:55:17.391535   51604 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-19 21:55:17.382509276 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:55:17.392156   51604 cni.go:84] Creating CNI manager for ""
	I1119 21:55:17.392222   51604 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:55:17.392271   51604 start.go:353] cluster config:
	{Name:functional-037096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-037096 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:55:17.393772   51604 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 19 21:55:30 functional-037096 crio[3627]: time="2025-11-19T21:55:30.05525004Z" level=info msg="Starting container: 23a21109cadd0400bd21cf9b6b6c9de7219e994789afd9dc2b9b5243f7c4989f" id=31910b6e-9396-44fa-8a38-064dfa318427 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 21:55:30 functional-037096 crio[3627]: time="2025-11-19T21:55:30.056940586Z" level=info msg="Started container" PID=7614 containerID=23a21109cadd0400bd21cf9b6b6c9de7219e994789afd9dc2b9b5243f7c4989f description=default/mysql-5bb876957f-bc8bj/mysql id=31910b6e-9396-44fa-8a38-064dfa318427 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c6b8a7efa09e99ec59289994ae223029514adda68cc18d4f3688d151425e47a
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.408263031Z" level=info msg="Stopping pod sandbox: 1fd8950828fe16114b816731268ff925b203531227436bf0a0ef7a8d7244d28b" id=dc2538c8-3ae3-423f-b2b0-568c99b1b443 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.408329989Z" level=info msg="Stopped pod sandbox (already stopped): 1fd8950828fe16114b816731268ff925b203531227436bf0a0ef7a8d7244d28b" id=dc2538c8-3ae3-423f-b2b0-568c99b1b443 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.408755077Z" level=info msg="Removing pod sandbox: 1fd8950828fe16114b816731268ff925b203531227436bf0a0ef7a8d7244d28b" id=379a4d8f-47f0-42d1-92e5-cff3e8435646 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.412404417Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.412475153Z" level=info msg="Removed pod sandbox: 1fd8950828fe16114b816731268ff925b203531227436bf0a0ef7a8d7244d28b" id=379a4d8f-47f0-42d1-92e5-cff3e8435646 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.412911167Z" level=info msg="Stopping pod sandbox: 59802cd24ccdfdefb89d7a5c609554193a8906b12a825ad7bc194e718ae53ab0" id=f312d809-16be-42bb-8dfa-db252f7fdc65 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.412965031Z" level=info msg="Stopped pod sandbox (already stopped): 59802cd24ccdfdefb89d7a5c609554193a8906b12a825ad7bc194e718ae53ab0" id=f312d809-16be-42bb-8dfa-db252f7fdc65 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.413250031Z" level=info msg="Removing pod sandbox: 59802cd24ccdfdefb89d7a5c609554193a8906b12a825ad7bc194e718ae53ab0" id=b6ca1063-2d7f-4432-b1a3-80b52787e6cb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.415730596Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.415786628Z" level=info msg="Removed pod sandbox: 59802cd24ccdfdefb89d7a5c609554193a8906b12a825ad7bc194e718ae53ab0" id=b6ca1063-2d7f-4432-b1a3-80b52787e6cb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.416153758Z" level=info msg="Stopping pod sandbox: 757133dba7ef4341bcbcb623cfd85bc25f6055aa9d49599f92e127bf85f1bf37" id=0000a52b-95f5-4bd8-b59b-ce275dd9b7f2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.416199618Z" level=info msg="Stopped pod sandbox (already stopped): 757133dba7ef4341bcbcb623cfd85bc25f6055aa9d49599f92e127bf85f1bf37" id=0000a52b-95f5-4bd8-b59b-ce275dd9b7f2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.416490815Z" level=info msg="Removing pod sandbox: 757133dba7ef4341bcbcb623cfd85bc25f6055aa9d49599f92e127bf85f1bf37" id=06ab50c6-7e23-4fdc-8b97-72c96523f164 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.418494421Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 21:55:31 functional-037096 crio[3627]: time="2025-11-19T21:55:31.418544707Z" level=info msg="Removed pod sandbox: 757133dba7ef4341bcbcb623cfd85bc25f6055aa9d49599f92e127bf85f1bf37" id=06ab50c6-7e23-4fdc-8b97-72c96523f164 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 19 21:55:37 functional-037096 crio[3627]: time="2025-11-19T21:55:37.418919317Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=65deec46-11bc-4617-86c8-8408ec58dc73 name=/runtime.v1.ImageService/PullImage
	Nov 19 21:55:37 functional-037096 crio[3627]: time="2025-11-19T21:55:37.41974161Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c6a6bab5-7cb1-4779-9f98-23db3d1b2abc name=/runtime.v1.ImageService/PullImage
	Nov 19 21:56:18 functional-037096 crio[3627]: time="2025-11-19T21:56:18.419255414Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ae0ca36b-77f1-4fcf-9d26-9bd219c69ca9 name=/runtime.v1.ImageService/PullImage
	Nov 19 21:56:19 functional-037096 crio[3627]: time="2025-11-19T21:56:19.419478864Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a247cb48-2b8a-450f-aa6a-cf022ebc612d name=/runtime.v1.ImageService/PullImage
	Nov 19 21:57:42 functional-037096 crio[3627]: time="2025-11-19T21:57:42.418978453Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0f110a5e-e5db-40cd-9277-ec2951ebff50 name=/runtime.v1.ImageService/PullImage
	Nov 19 21:57:54 functional-037096 crio[3627]: time="2025-11-19T21:57:54.419374038Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=eb4cee74-bbb1-468b-9437-cab3d259e366 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:00:34 functional-037096 crio[3627]: time="2025-11-19T22:00:34.418953987Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a2d53179-a050-47fa-92d7-c4fc5540c758 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:00:35 functional-037096 crio[3627]: time="2025-11-19T22:00:35.419552381Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c0ae2c55-8124-4c67-b1e0-0409e990959e name=/runtime.v1.ImageService/PullImage
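Note: over the span shown above (21:55 through 22:00), CRI-O logs only repeated "Pulling image: kicbase/echo-server:latest" entries with no corresponding "Pulled image" line, so the pull appears to stall rather than fail outright; that would keep the echo-server-based pods (including hello-node-connect) from ever becoming Ready. A minimal sketch for checking pull state from inside the node, assuming crictl is present in the kicbase image as is usual for the crio runtime:

    # inspect the node's image store for the echo-server image
    minikube -p functional-037096 ssh -- sudo crictl images | grep echo-server
    # retry the pull interactively to surface any registry or network error
    minikube -p functional-037096 ssh -- sudo crictl pull kicbase/echo-server:latest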
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	23a21109cadd0       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   6c6b8a7efa09e       mysql-5bb876957f-bc8bj                       default
	203356beeba0b       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   b8e4ceb794cff       kubernetes-dashboard-855c9754f9-bzw6c        kubernetes-dashboard
	8cc6ce59bb839       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   ffe7b934f5060       dashboard-metrics-scraper-77bf4d6c4c-cphvk   kubernetes-dashboard
	d334dd4d0a729       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   d9faeab20113d       sp-pod                                       default
	f3c44af382607       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   e3902dd698595       busybox-mount                                default
	c246ec2e10c2b       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  9 minutes ago       Running             nginx                       0                   f0d87a90bced5       nginx-svc                                    default
	0d66cfee8089f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   6153357d3699f       kube-apiserver-functional-037096             kube-system
	5c68c7fa3b662       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   fa6d14b728f2f       kube-controller-manager-functional-037096    kube-system
	ed9b1fca28478       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   5485d3155ce0e       kube-scheduler-functional-037096             kube-system
	570b04be69f44       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   8b07f5df5348a       etcd-functional-037096                       kube-system
	3484d2612d5ad       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   2decf50281480       kube-proxy-2tjxq                             kube-system
	07c3a0b291dca       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   13f8c60725bff       kindnet-nvqlz                                kube-system
	73f4b117336ce       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   fa6d14b728f2f       kube-controller-manager-functional-037096    kube-system
	ef9a94dfbeb57       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   f5b60d2fc0bcc       coredns-66bc5c9577-fhmjj                     kube-system
	552d9eb43d629       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   32b7fa955159a       storage-provisioner                          kube-system
	ebd8bb646b27c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   f5b60d2fc0bcc       coredns-66bc5c9577-fhmjj                     kube-system
	cb6505a584f73       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   32b7fa955159a       storage-provisioner                          kube-system
	100f834abc306       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   13f8c60725bff       kindnet-nvqlz                                kube-system
	9ac9e1f76d840       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   2decf50281480       kube-proxy-2tjxq                             kube-system
	e3aaf47b5a2cb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   8b07f5df5348a       etcd-functional-037096                       kube-system
	e608774c0364d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   5485d3155ce0e       kube-scheduler-functional-037096             kube-system
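Note: no hello-node or hello-node-connect container appears in the container status table above, consistent with the empty service endpoints earlier (the containers were never created because the image pull did not complete). As a hypothetical follow-up, the non-running pods could be listed directly with a standard kubectl field selector:

    kubectl --context functional-037096 get pods -A --field-selector=status.phase!=Running -o wide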
	
	
	==> coredns [ebd8bb646b27cfe5b7a7dde629795644116d867208f64181d13771ee2a0b7cb0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55771 - 19751 "HINFO IN 7894781669855304458.2605576950755797175. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.099077564s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [ef9a94dfbeb5722dfb99cf43beb94704cbbb89e3e2276c820880911bc48d623d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47996 - 44561 "HINFO IN 2764909744321271950.1560506240262339421. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.090725343s
	
	
	==> describe nodes <==
	Name:               functional-037096
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-037096
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=functional-037096
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T21_53_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 21:53:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-037096
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:04:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:04:55 +0000   Wed, 19 Nov 2025 21:53:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:04:55 +0000   Wed, 19 Nov 2025 21:53:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:04:55 +0000   Wed, 19 Nov 2025 21:53:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:04:55 +0000   Wed, 19 Nov 2025 21:53:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-037096
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                919522eb-35c4-4771-bf3e-7f53a7c904d0
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-ph2np                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  default                     hello-node-connect-7d85dfc575-nvn8h           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-bc8bj                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m38s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 coredns-66bc5c9577-fhmjj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-037096                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-nvqlz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-037096              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-037096     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-2tjxq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-037096              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-cphvk    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-bzw6c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-037096 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-037096 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-037096 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-037096 event: Registered Node functional-037096 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-037096 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-037096 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-037096 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-037096 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-037096 event: Registered Node functional-037096 in Controller
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [570b04be69f443bce5a9a5bfe3807e5c65d97e7c699d5f0164180ad34ca3b980] <==
	{"level":"warn","ts":"2025-11-19T21:54:32.671138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.678092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.684807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.692537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.699648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.706145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.712381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.718507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.726648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.732727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.739125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.745225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.751552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.763493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.769400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.774978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.781292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.787092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.792856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.803409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.809710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:54:32.815854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42652","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T22:04:32.388027Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1135}
	{"level":"info","ts":"2025-11-19T22:04:32.407421Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1135,"took":"19.06311ms","hash":95522773,"current-db-size-bytes":3633152,"current-db-size":"3.6 MB","current-db-size-in-use-bytes":1667072,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-11-19T22:04:32.407456Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":95522773,"revision":1135,"compact-revision":-1}
	
	
	==> etcd [e3aaf47b5a2cb81958b7fd4801b474261e38fbfa6be63d4343c7d56452f90751] <==
	{"level":"warn","ts":"2025-11-19T21:53:29.617340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:53:29.623210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:53:29.629661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:53:29.636501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:53:29.661382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:53:29.667160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T21:53:29.716156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37910","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T21:54:12.693974Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-19T21:54:12.694096Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-037096","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-19T21:54:12.694179Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T21:54:19.695281Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-19T21:54:19.695449Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T21:54:19.695484Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-19T21:54:19.695617Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-19T21:54:19.695636Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-19T21:54:19.696062Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T21:54:19.696134Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T21:54:19.696148Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-19T21:54:19.696588Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-19T21:54:19.696611Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-19T21:54:19.696620Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T21:54:19.697853Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-19T21:54:19.697910Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-19T21:54:19.697954Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-19T21:54:19.697966Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-037096","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 22:04:59 up 47 min,  0 user,  load average: 0.17, 0.19, 0.28
	Linux functional-037096 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [07c3a0b291dca5e67b6af0c6e1c8369e7262ce2d057b7639171b8ed283ecedce] <==
	I1119 22:02:53.819801       1 main.go:301] handling current node
	I1119 22:03:03.820520       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:03:03.820547       1 main.go:301] handling current node
	I1119 22:03:13.819405       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:03:13.819440       1 main.go:301] handling current node
	I1119 22:03:23.828192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:03:23.828225       1 main.go:301] handling current node
	I1119 22:03:33.828142       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:03:33.828171       1 main.go:301] handling current node
	I1119 22:03:43.819973       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:03:43.820009       1 main.go:301] handling current node
	I1119 22:03:53.820433       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:03:53.820463       1 main.go:301] handling current node
	I1119 22:04:03.824629       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:04:03.824673       1 main.go:301] handling current node
	I1119 22:04:13.820177       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:04:13.820214       1 main.go:301] handling current node
	I1119 22:04:23.822316       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:04:23.822347       1 main.go:301] handling current node
	I1119 22:04:33.823899       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:04:33.823936       1 main.go:301] handling current node
	I1119 22:04:43.819501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:04:43.819530       1 main.go:301] handling current node
	I1119 22:04:53.822060       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 22:04:53.822097       1 main.go:301] handling current node
	
	
	==> kindnet [100f834abc306cf20c37f1c7bf611e58650ff5eb133fd0aa3684c97624052890] <==
	I1119 21:53:38.276898       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 21:53:38.277140       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1119 21:53:38.277264       1 main.go:148] setting mtu 1500 for CNI 
	I1119 21:53:38.277281       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 21:53:38.277297       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T21:53:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 21:53:38.571582       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 21:53:38.573299       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 21:53:38.573319       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 21:53:38.574332       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 21:53:38.974108       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 21:53:38.974135       1 metrics.go:72] Registering metrics
	I1119 21:53:38.974192       1 controller.go:711] "Syncing nftables rules"
	I1119 21:53:48.479001       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:53:48.479054       1 main.go:301] handling current node
	I1119 21:53:58.479504       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:53:58.479538       1 main.go:301] handling current node
	I1119 21:54:08.483941       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1119 21:54:08.483969       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0d66cfee8089fed0914fa5b49462093520186b94a3ba243e3275bc43106bbea6] <==
	I1119 21:54:33.332919       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 21:54:33.341455       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 21:54:33.442375       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 21:54:34.225541       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1119 21:54:34.429106       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1119 21:54:34.430069       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 21:54:34.435063       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 21:54:34.747432       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 21:54:34.829728       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 21:54:34.871155       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 21:54:34.876644       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 21:54:36.944216       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 21:54:51.855842       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.107.45.102"}
	I1119 21:54:57.632846       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.135.38"}
	I1119 21:54:57.886308       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.104.189.104"}
	I1119 21:55:01.813041       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.105.133.5"}
	E1119 21:55:12.745431       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55868: use of closed network connection
	I1119 21:55:18.166106       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 21:55:18.249334       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.78.167"}
	I1119 21:55:18.264742       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.40.53"}
	E1119 21:55:21.797010       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56252: use of closed network connection
	I1119 21:55:21.949229       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.101.214.112"}
	E1119 21:55:37.077384       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45764: use of closed network connection
	E1119 21:55:38.444060       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45780: use of closed network connection
	I1119 22:04:33.251395       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5c68c7fa3b6621b449b16a162d9ec949954eaca52ecb93cf2eaba4dd114f8be6] <==
	I1119 21:54:36.641174       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 21:54:36.641211       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 21:54:36.641236       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 21:54:36.641290       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 21:54:36.641318       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 21:54:36.641285       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 21:54:36.641327       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 21:54:36.641357       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 21:54:36.641769       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 21:54:36.642763       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 21:54:36.642862       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 21:54:36.642890       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 21:54:36.645121       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 21:54:36.646309       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 21:54:36.646404       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 21:54:36.646452       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 21:54:36.646530       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-037096"
	I1119 21:54:36.646573       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 21:54:36.661546       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1119 21:55:18.207095       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1119 21:55:18.211624       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1119 21:55:18.212853       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1119 21:55:18.215376       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1119 21:55:18.216612       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1119 21:55:18.221508       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [73f4b117336ce0a171cecd5c85d3fad04dcbf0953d33415ebbaba2fc5b7e646d] <==
	I1119 21:54:13.803927       1 serving.go:386] Generated self-signed cert in-memory
	I1119 21:54:14.011182       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1119 21:54:14.011202       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 21:54:14.012556       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1119 21:54:14.012550       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1119 21:54:14.012850       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1119 21:54:14.012946       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 21:54:14.020758       1 controllermanager.go:781] "Started controller" controller="serviceaccount-token-controller"
	I1119 21:54:14.020796       1 shared_informer.go:349] "Waiting for caches to sync" controller="tokens"
	I1119 21:54:22.488190       1 controllermanager.go:781] "Started controller" controller="clusterrole-aggregation-controller"
	I1119 21:54:22.488303       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I1119 21:54:22.488325       1 shared_informer.go:349] "Waiting for caches to sync" controller="ClusterRoleAggregator"
	F1119 21:54:22.500385       1 client_builder_dynamic.go:174] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/validatingadmissionpolicy-status-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-proxy [3484d2612d5ad9680032f7569dd88d6aed02f04ba88bb93edc5f0e7b3bbd8169] <==
	I1119 21:54:13.511737       1 server_linux.go:53] "Using iptables proxy"
	I1119 21:54:13.583580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 21:54:13.684105       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 21:54:13.684131       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 21:54:13.684196       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 21:54:13.702751       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 21:54:13.702810       1 server_linux.go:132] "Using iptables Proxier"
	I1119 21:54:13.708332       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 21:54:13.708955       1 server.go:527] "Version info" version="v1.34.1"
	I1119 21:54:13.708975       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 21:54:13.710650       1 config.go:106] "Starting endpoint slice config controller"
	I1119 21:54:13.710671       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 21:54:13.710716       1 config.go:309] "Starting node config controller"
	I1119 21:54:13.710731       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 21:54:13.710738       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 21:54:13.710744       1 config.go:200] "Starting service config controller"
	I1119 21:54:13.710757       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 21:54:13.710794       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 21:54:13.710864       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 21:54:13.810810       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 21:54:13.810849       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 21:54:13.810936       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	E1119 21:54:33.264586       1 reflector.go:205] "Failed to watch" err="nodes \"functional-037096\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [9ac9e1f76d840ca53ff13fb4ac18ea49446120d4ea6a4cfd577a21bc0445f563] <==
	I1119 21:53:38.192264       1 server_linux.go:53] "Using iptables proxy"
	I1119 21:53:38.265311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 21:53:38.366237       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 21:53:38.366293       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1119 21:53:38.366377       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 21:53:38.387881       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 21:53:38.387922       1 server_linux.go:132] "Using iptables Proxier"
	I1119 21:53:38.392791       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 21:53:38.393174       1 server.go:527] "Version info" version="v1.34.1"
	I1119 21:53:38.393219       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 21:53:38.395712       1 config.go:200] "Starting service config controller"
	I1119 21:53:38.395734       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 21:53:38.395777       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 21:53:38.395784       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 21:53:38.395805       1 config.go:106] "Starting endpoint slice config controller"
	I1119 21:53:38.395841       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 21:53:38.395851       1 config.go:309] "Starting node config controller"
	I1119 21:53:38.395867       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 21:53:38.395875       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 21:53:38.496678       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 21:53:38.496718       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 21:53:38.496717       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [e608774c0364dd4fd1d90305568a30090096f937aba61c939083af01194f1a56] <==
	E1119 21:53:30.097750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 21:53:30.097803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 21:53:30.097831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 21:53:30.097877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 21:53:30.097963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 21:53:30.097968       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 21:53:30.098075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 21:53:30.908335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 21:53:31.047557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 21:53:31.087789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 21:53:31.091806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 21:53:31.094597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 21:53:31.153573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 21:53:31.199502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 21:53:31.199658       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 21:53:31.231840       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 21:53:31.240662       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 21:53:31.248582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1119 21:53:31.693870       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 21:54:29.838991       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1119 21:54:29.838974       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 21:54:29.839015       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1119 21:54:29.839095       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1119 21:54:29.839106       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1119 21:54:29.839128       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ed9b1fca2847854e7da696e93f367bdc903cb86147120bc111380bd65f73c4b5] <==
	I1119 21:54:32.201482       1 serving.go:386] Generated self-signed cert in-memory
	W1119 21:54:33.240140       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 21:54:33.240174       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 21:54:33.240186       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 21:54:33.240195       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 21:54:33.273701       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 21:54:33.273726       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 21:54:33.276087       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 21:54:33.276131       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 21:54:33.276398       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 21:54:33.276465       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 21:54:33.376332       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:02:21 functional-037096 kubelet[4348]: E1119 22:02:21.419218    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	Nov 19 22:02:33 functional-037096 kubelet[4348]: E1119 22:02:33.419204    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nvn8h" podUID="939688e2-e79e-4b52-b3ad-1daf22d93d47"
	Nov 19 22:02:33 functional-037096 kubelet[4348]: E1119 22:02:33.419285    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	Nov 19 22:02:45 functional-037096 kubelet[4348]: E1119 22:02:45.419096    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nvn8h" podUID="939688e2-e79e-4b52-b3ad-1daf22d93d47"
	Nov 19 22:02:48 functional-037096 kubelet[4348]: E1119 22:02:48.419299    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	Nov 19 22:02:58 functional-037096 kubelet[4348]: E1119 22:02:58.419426    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nvn8h" podUID="939688e2-e79e-4b52-b3ad-1daf22d93d47"
	Nov 19 22:02:59 functional-037096 kubelet[4348]: E1119 22:02:59.419697    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	Nov 19 22:03:11 functional-037096 kubelet[4348]: E1119 22:03:11.419487    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nvn8h" podUID="939688e2-e79e-4b52-b3ad-1daf22d93d47"
	Nov 19 22:03:14 functional-037096 kubelet[4348]: E1119 22:03:14.419138    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	Nov 19 22:03:24 functional-037096 kubelet[4348]: E1119 22:03:24.418773    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nvn8h" podUID="939688e2-e79e-4b52-b3ad-1daf22d93d47"
	Nov 19 22:03:27 functional-037096 kubelet[4348]: E1119 22:03:27.419115    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	Nov 19 22:03:35 functional-037096 kubelet[4348]: E1119 22:03:35.419475    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nvn8h" podUID="939688e2-e79e-4b52-b3ad-1daf22d93d47"
	Nov 19 22:03:38 functional-037096 kubelet[4348]: E1119 22:03:38.418936    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	Nov 19 22:03:49 functional-037096 kubelet[4348]: E1119 22:03:49.419510    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nvn8h" podUID="939688e2-e79e-4b52-b3ad-1daf22d93d47"
	Nov 19 22:03:53 functional-037096 kubelet[4348]: E1119 22:03:53.419706    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	Nov 19 22:04:02 functional-037096 kubelet[4348]: E1119 22:04:02.419234    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nvn8h" podUID="939688e2-e79e-4b52-b3ad-1daf22d93d47"
	Nov 19 22:04:05 functional-037096 kubelet[4348]: E1119 22:04:05.419241    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	Nov 19 22:04:16 functional-037096 kubelet[4348]: E1119 22:04:16.419519    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nvn8h" podUID="939688e2-e79e-4b52-b3ad-1daf22d93d47"
	Nov 19 22:04:16 functional-037096 kubelet[4348]: E1119 22:04:16.419600    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	Nov 19 22:04:28 functional-037096 kubelet[4348]: E1119 22:04:28.418893    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nvn8h" podUID="939688e2-e79e-4b52-b3ad-1daf22d93d47"
	Nov 19 22:04:29 functional-037096 kubelet[4348]: E1119 22:04:29.419200    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	Nov 19 22:04:39 functional-037096 kubelet[4348]: E1119 22:04:39.419254    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nvn8h" podUID="939688e2-e79e-4b52-b3ad-1daf22d93d47"
	Nov 19 22:04:43 functional-037096 kubelet[4348]: E1119 22:04:43.418696    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	Nov 19 22:04:53 functional-037096 kubelet[4348]: E1119 22:04:53.418845    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nvn8h" podUID="939688e2-e79e-4b52-b3ad-1daf22d93d47"
	Nov 19 22:04:54 functional-037096 kubelet[4348]: E1119 22:04:54.419078    4348 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-ph2np" podUID="ac96a0d3-40d9-4a21-a1a9-836b5aee15bc"
	
	
	==> kubernetes-dashboard [203356beeba0b99a4f5801b895d405c0ed2ebfb98f603848ebffe4df1725a829] <==
	2025/11/19 21:55:23 Starting overwatch
	2025/11/19 21:55:23 Using namespace: kubernetes-dashboard
	2025/11/19 21:55:23 Using in-cluster config to connect to apiserver
	2025/11/19 21:55:23 Using secret token for csrf signing
	2025/11/19 21:55:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 21:55:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 21:55:23 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 21:55:23 Generating JWE encryption key
	2025/11/19 21:55:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 21:55:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 21:55:23 Initializing JWE encryption key from synchronized object
	2025/11/19 21:55:23 Creating in-cluster Sidecar client
	2025/11/19 21:55:23 Successful request to sidecar
	2025/11/19 21:55:23 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [552d9eb43d629ff4b6d7558fb3924d39cdd653713ec22a399163651f6c7afab3] <==
	W1119 22:04:33.904774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:35.907219       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:35.910604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:37.912896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:37.916060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:39.918551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:39.922205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:41.924707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:41.928255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:43.930608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:43.935183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:45.937651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:45.941720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:47.944269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:47.948584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:49.950764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:49.954642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:51.956747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:51.960079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:53.962771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:53.967196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:55.969804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:55.973006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:57.975829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:04:57.979711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cb6505a584f73b21aadbb798ac682632de18fa97c2e021b8614952fa567f183f] <==
	W1119 21:53:49.319905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:53:49.323756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 21:53:49.418641       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-037096_6e9042be-5a33-426f-a2d2-f984e4278b57!
	W1119 21:53:51.326454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:53:51.330243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:53:53.333419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:53:53.337513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:53:55.340024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:53:55.344076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:53:57.347015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:53:57.351557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:53:59.354317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:53:59.357882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:01.360756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:01.365805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:03.368286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:03.373110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:05.376401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:05.380019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:07.382747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:07.386165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:09.388809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:09.391907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:11.394579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 21:54:11.398677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
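The kubelet entries in the dump above repeatedly report "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list", which is why the hello-node and hello-node-connect pods stay in ImagePullBackOff. A minimal sketch of the usual remedies follows; the image tag "1.0", the deployment names (inferred from the pod names), and the node config path are assumptions, not taken from this report.

	# Pin a fully qualified image instead of the ambiguous short name
	# (deployment/container names inferred from the pods shown above; tag assumed):
	kubectl --context functional-037096 set image deployment/hello-node \
	  echo-server=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-037096 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:1.0
	# Or, on the node, relax CRI-O short-name resolution in the standard
	# containers/image config (the "enforcing" value is what the kubelet error reflects):
	#   /etc/containers/registries.conf
	#     short-name-mode = "permissive"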
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-037096 -n functional-037096
helpers_test.go:269: (dbg) Run:  kubectl --context functional-037096 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-ph2np hello-node-connect-7d85dfc575-nvn8h
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-037096 describe pod busybox-mount hello-node-75c85bcc94-ph2np hello-node-connect-7d85dfc575-nvn8h
helpers_test.go:290: (dbg) kubectl --context functional-037096 describe pod busybox-mount hello-node-75c85bcc94-ph2np hello-node-connect-7d85dfc575-nvn8h:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-037096/192.168.49.2
	Start Time:       Wed, 19 Nov 2025 21:55:08 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://f3c44af3826070cc47f7f3b9e28ccb837a45bbe15e2322c405f353fde1aa67b9
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 19 Nov 2025 21:55:09 +0000
	      Finished:     Wed, 19 Nov 2025 21:55:09 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fp4z4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-fp4z4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-037096
	  Normal  Pulling    9m51s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m51s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 727ms (727ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m51s  kubelet            Created container: mount-munger
	  Normal  Started    9m51s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-ph2np
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-037096/192.168.49.2
	Start Time:       Wed, 19 Nov 2025 21:55:01 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6v6n9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6v6n9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m58s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-ph2np to functional-037096
	  Normal   Pulling    7m6s (x5 over 9m58s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m6s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m6s (x5 over 9m58s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m52s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m39s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-nvn8h
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-037096/192.168.49.2
	Start Time:       Wed, 19 Nov 2025 21:54:57 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8jsnq (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-8jsnq:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-nvn8h to functional-037096
	  Normal   Pulling    7m18s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m18s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m18s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m54s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m39s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.69s)
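Note on the warnings at the top of this block: the repeated warnings.go entries are deprecation notices surfaced by client-go; some component in this run still reads the legacy v1 Endpoints API, which Kubernetes now asks clients to replace with discovery.k8s.io/v1 EndpointSlice. A minimal way to compare the two views of the service under test, assuming the functional-037096 context is still reachable (a diagnostic sketch, not part of the test):

	# assumes the cluster is still up; service name taken from the post-mortem above
	kubectl --context functional-037096 get endpoints hello-node-connect -n default
	kubectl --context functional-037096 get endpointslices.discovery.k8s.io -n default -l kubernetes.io/service-name=hello-node-connect

The failure itself comes from the ImagePullBackOff shown in the pod events, not from these warnings; the short-name note after the ServiceCmd/DeployApp failure below applies here as well.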

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (2.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-037096 image ls --format short --alsologtostderr: (2.267188441s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-037096 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-037096 image ls --format short --alsologtostderr:
I1119 21:55:27.259703   52768 out.go:360] Setting OutFile to fd 1 ...
I1119 21:55:27.260010   52768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:27.260023   52768 out.go:374] Setting ErrFile to fd 2...
I1119 21:55:27.260028   52768 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:27.260298   52768 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
I1119 21:55:27.261122   52768 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 21:55:27.261219   52768 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 21:55:27.261591   52768 cli_runner.go:164] Run: docker container inspect functional-037096 --format={{.State.Status}}
I1119 21:55:27.280994   52768 ssh_runner.go:195] Run: systemctl --version
I1119 21:55:27.281035   52768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037096
I1119 21:55:27.298271   52768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/functional-037096/id_rsa Username:docker}
I1119 21:55:27.394769   52768 ssh_runner.go:195] Run: sudo crictl images --output json
I1119 21:55:29.423924   52768 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.029116668s)
W1119 21:55:29.423999   52768 cache_images.go:736] Failed to list images for profile functional-037096 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

                                                
                                                
stderr:
E1119 21:55:29.421314    7451 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="image:{}"
time="2025-11-19T21:55:29Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.27s)
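The empty image list is a timeout rather than a missing image: the in-node `sudo crictl images --output json` call completed in just over two seconds and was cancelled with DeadlineExceeded, so minikube had nothing to print and the check for registry.k8s.io/pause failed. A sketch of rerunning the same call with a longer client deadline, assuming the profile is still running and the node's crictl supports the global --timeout flag:

	# --timeout raises crictl's default (2s) RPC deadline; the 30s value is illustrative
	out/minikube-linux-amd64 -p functional-037096 ssh -- sudo crictl --timeout=30s images --output json

If this returns the list, a slow CRI-O ListImages response is the culprit rather than an actually missing pause image.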

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image load --daemon kicbase/echo-server:functional-037096 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-037096" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image load --daemon kicbase/echo-server:functional-037096 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-037096" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-037096
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image load --daemon kicbase/echo-server:functional-037096 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-037096" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.45s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image save kicbase/echo-server:functional-037096 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1119 21:55:01.249793   47746 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:55:01.249946   47746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:01.249957   47746 out.go:374] Setting ErrFile to fd 2...
	I1119 21:55:01.249963   47746 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:01.250157   47746 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:55:01.250682   47746 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:55:01.250806   47746 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:55:01.251172   47746 cli_runner.go:164] Run: docker container inspect functional-037096 --format={{.State.Status}}
	I1119 21:55:01.268515   47746 ssh_runner.go:195] Run: systemctl --version
	I1119 21:55:01.268565   47746 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037096
	I1119 21:55:01.284295   47746 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/functional-037096/id_rsa Username:docker}
	I1119 21:55:01.373582   47746 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1119 21:55:01.373625   47746 cache_images.go:255] Failed to load cached images for "functional-037096": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1119 21:55:01.373640   47746 cache_images.go:267] failed pushing to: functional-037096

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
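This failure is downstream of ImageSaveToFile above: the load step expects /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar, but the earlier `image save` never wrote it, so cache_images reports "no such file or directory". A minimal manual round-trip to separate the two steps, assuming the profile is still up (the /tmp path is illustrative):

	# the save step should leave a tarball on the host before load is attempted
	out/minikube-linux-amd64 -p functional-037096 image save kicbase/echo-server:functional-037096 /tmp/echo-server.tar --alsologtostderr
	ls -l /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-037096 image load /tmp/echo-server.tar --alsologtostderr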

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-037096
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image save --daemon kicbase/echo-server:functional-037096 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-037096
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-037096: exit status 1 (15.796068ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-037096

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-037096

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-037096 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-037096 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-ph2np" [ac96a0d3-40d9-4a21-a1a9-836b5aee15bc] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-037096 -n functional-037096
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-19 22:05:02.115679768 +0000 UTC m=+1084.942702925
functional_test.go:1460: (dbg) Run:  kubectl --context functional-037096 describe po hello-node-75c85bcc94-ph2np -n default
functional_test.go:1460: (dbg) kubectl --context functional-037096 describe po hello-node-75c85bcc94-ph2np -n default:
Name:             hello-node-75c85bcc94-ph2np
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-037096/192.168.49.2
Start Time:       Wed, 19 Nov 2025 21:55:01 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6v6n9 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6v6n9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-ph2np to functional-037096
Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m54s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m41s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-037096 logs hello-node-75c85bcc94-ph2np -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-037096 logs hello-node-75c85bcc94-ph2np -n default: exit status 1 (56.503741ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-ph2np" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-037096 logs hello-node-75c85bcc94-ph2np -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.54s)
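Every Failed event above has the same root cause: the node's CRI-O short-name policy is enforcing, and the unqualified reference kicbase/echo-server matches more than one unqualified-search registry, so the pull is rejected as ambiguous instead of falling back to docker.io. Two hedged checks, assuming the node follows the stock kicbase layout (the hello-node-fq deployment name is made up for isolation only):

	# inspect the short-name policy shipped on the node
	out/minikube-linux-amd64 -p functional-037096 ssh -- sudo grep -R short-name /etc/containers/registries.conf /etc/containers/registries.conf.d
	# redeploy with a fully qualified reference so no short-name resolution is needed
	kubectl --context functional-037096 create deployment hello-node-fq --image=docker.io/kicbase/echo-server:latest

The second command only isolates the failure mode; the test deliberately uses the short name, so the real fix belongs in the node's registries configuration or in the test's image reference.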

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 service --namespace=default --https --url hello-node: exit status 115 (514.35947ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32732
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-037096 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 service hello-node --url --format={{.IP}}: exit status 115 (513.213232ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-037096 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 service hello-node --url: exit status 115 (514.797327ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32732
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-037096 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32732
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.51s)
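HTTPS, Format and URL all fail the same way: minikube resolves the NodePort (32732) correctly but exits with SVC_UNREACHABLE because the hello-node service has no running pod behind it (see the DeployApp failure above). Once the image pull is fixed, the printed URL should respond; a trivial host-side check, assuming the NodePort is unchanged:

	# expects the echo-server to answer once a pod is actually Running
	curl -s http://192.168.49.2:32732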

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.24s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-538999 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-538999 --output=json --user=testUser: exit status 80 (2.239312269s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d1d8fefe-d1b7-484a-b782-c3438453c38f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-538999 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"07062f28-7f18-4e58-a7ed-b1b48c3d055b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-19T22:15:36Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"4b7e1868-efd2-4604-af9a-d599dd1c1624","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-538999 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.24s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-538999 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-538999 --output=json --user=testUser: exit status 80 (1.575500881s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c2963193-a0bc-43ca-a26a-c798f2be135e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-538999 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"2033aa16-c45a-48cd-a2c9-af5dc0ddec11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-19T22:15:38Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"645ac75b-aba6-4d80-9db1-0ade44994fcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-538999 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.58s)
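pause and unpause abort at the same step: minikube shells out to `sudo runc list -f json`, and runc fails because its default state directory /run/runc does not exist on the node, i.e. no container there was ever created through runc's default root. One plausible explanation is that this CRI-O build is configured with a different OCI runtime (or a non-default runc root). A sketch of how to check, assuming the json-output-538999 node is still up and follows the stock kicbase layout:

	# which runtime state directories actually exist on the node
	out/minikube-linux-amd64 -p json-output-538999 ssh -- ls /run/runc /run/crun
	# which runtime CRI-O is configured to use
	out/minikube-linux-amd64 -p json-output-538999 ssh -- sudo grep -R default_runtime /etc/crio

The identical "open /run/runc" error recurs in TestPause/serial/Pause further down.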

                                                
                                    
x
+
TestPause/serial/Pause (7.76s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-340203 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-340203 --alsologtostderr -v=5: exit status 80 (2.130901598s)

                                                
                                                
-- stdout --
	* Pausing node pause-340203 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:29:17.226131  197447 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:29:17.226234  197447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:29:17.226244  197447 out.go:374] Setting ErrFile to fd 2...
	I1119 22:29:17.226248  197447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:29:17.226499  197447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:29:17.226847  197447 out.go:368] Setting JSON to false
	I1119 22:29:17.226897  197447 mustload.go:66] Loading cluster: pause-340203
	I1119 22:29:17.227379  197447 config.go:182] Loaded profile config "pause-340203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:29:17.227857  197447 cli_runner.go:164] Run: docker container inspect pause-340203 --format={{.State.Status}}
	I1119 22:29:17.245376  197447 host.go:66] Checking if "pause-340203" exists ...
	I1119 22:29:17.245613  197447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:29:17.305706  197447 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-19 22:29:17.296229187 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:29:17.306299  197447 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763575914-21918/minikube-v1.37.0-1763575914-21918-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763575914-21918-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-340203 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 22:29:17.307769  197447 out.go:179] * Pausing node pause-340203 ... 
	I1119 22:29:17.308861  197447 host.go:66] Checking if "pause-340203" exists ...
	I1119 22:29:17.309112  197447 ssh_runner.go:195] Run: systemctl --version
	I1119 22:29:17.309159  197447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-340203
	I1119 22:29:17.326248  197447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/pause-340203/id_rsa Username:docker}
	I1119 22:29:17.415705  197447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:29:17.426967  197447 pause.go:52] kubelet running: true
	I1119 22:29:17.427048  197447 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:29:17.563252  197447 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:29:17.563343  197447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:29:17.630160  197447 cri.go:89] found id: "2197a8a76c658581a09f830e15a64684aabb398c2af121782e6e249a1c9691fc"
	I1119 22:29:17.630180  197447 cri.go:89] found id: "0eabb384c878f6fbd11d62c0edc5b450fb1a92835676b3246704eb658f4b65aa"
	I1119 22:29:17.630184  197447 cri.go:89] found id: "feaa3aef65538a4f7c257c3c566ce1333f9b016219ef6f1d794101b05ece0c08"
	I1119 22:29:17.630187  197447 cri.go:89] found id: "f804c882f29f9621335011914f58804da767a84d82dc0993fa81751604c82124"
	I1119 22:29:17.630189  197447 cri.go:89] found id: "32da299595ba8ed38efc7cf713977e86ee07eb1fe3a0ac7c51f1f34b8a6e132e"
	I1119 22:29:17.630192  197447 cri.go:89] found id: "0224b50772a5fad0a10fb5948aaa6aeaae8ab5376e02974888b6e1494c563bce"
	I1119 22:29:17.630195  197447 cri.go:89] found id: "bf5482b53da28621e00f7bd15befc3b9f6a1c547a06579444ce1fd6e26181553"
	I1119 22:29:17.630197  197447 cri.go:89] found id: "44743529a36798e137bd2f9277293ceee4d3643bfa7cb9036938d2df1c3e52f5"
	I1119 22:29:17.630200  197447 cri.go:89] found id: ""
	I1119 22:29:17.630233  197447 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:29:17.641878  197447 retry.go:31] will retry after 279.990055ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:29:17Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:29:17.922402  197447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:29:17.935274  197447 pause.go:52] kubelet running: false
	I1119 22:29:17.935329  197447 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:29:18.079544  197447 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:29:18.079626  197447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:29:18.149740  197447 cri.go:89] found id: "2197a8a76c658581a09f830e15a64684aabb398c2af121782e6e249a1c9691fc"
	I1119 22:29:18.149766  197447 cri.go:89] found id: "0eabb384c878f6fbd11d62c0edc5b450fb1a92835676b3246704eb658f4b65aa"
	I1119 22:29:18.149771  197447 cri.go:89] found id: "feaa3aef65538a4f7c257c3c566ce1333f9b016219ef6f1d794101b05ece0c08"
	I1119 22:29:18.149775  197447 cri.go:89] found id: "f804c882f29f9621335011914f58804da767a84d82dc0993fa81751604c82124"
	I1119 22:29:18.149779  197447 cri.go:89] found id: "32da299595ba8ed38efc7cf713977e86ee07eb1fe3a0ac7c51f1f34b8a6e132e"
	I1119 22:29:18.149785  197447 cri.go:89] found id: "0224b50772a5fad0a10fb5948aaa6aeaae8ab5376e02974888b6e1494c563bce"
	I1119 22:29:18.149789  197447 cri.go:89] found id: "bf5482b53da28621e00f7bd15befc3b9f6a1c547a06579444ce1fd6e26181553"
	I1119 22:29:18.149793  197447 cri.go:89] found id: "44743529a36798e137bd2f9277293ceee4d3643bfa7cb9036938d2df1c3e52f5"
	I1119 22:29:18.149797  197447 cri.go:89] found id: ""
	I1119 22:29:18.149880  197447 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:29:18.162095  197447 retry.go:31] will retry after 197.539737ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:29:18Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:29:18.360549  197447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:29:18.379643  197447 pause.go:52] kubelet running: false
	I1119 22:29:18.379712  197447 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:29:18.493959  197447 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:29:18.494029  197447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:29:18.561472  197447 cri.go:89] found id: "2197a8a76c658581a09f830e15a64684aabb398c2af121782e6e249a1c9691fc"
	I1119 22:29:18.561496  197447 cri.go:89] found id: "0eabb384c878f6fbd11d62c0edc5b450fb1a92835676b3246704eb658f4b65aa"
	I1119 22:29:18.561500  197447 cri.go:89] found id: "feaa3aef65538a4f7c257c3c566ce1333f9b016219ef6f1d794101b05ece0c08"
	I1119 22:29:18.561503  197447 cri.go:89] found id: "f804c882f29f9621335011914f58804da767a84d82dc0993fa81751604c82124"
	I1119 22:29:18.561506  197447 cri.go:89] found id: "32da299595ba8ed38efc7cf713977e86ee07eb1fe3a0ac7c51f1f34b8a6e132e"
	I1119 22:29:18.561512  197447 cri.go:89] found id: "0224b50772a5fad0a10fb5948aaa6aeaae8ab5376e02974888b6e1494c563bce"
	I1119 22:29:18.561514  197447 cri.go:89] found id: "bf5482b53da28621e00f7bd15befc3b9f6a1c547a06579444ce1fd6e26181553"
	I1119 22:29:18.561517  197447 cri.go:89] found id: "44743529a36798e137bd2f9277293ceee4d3643bfa7cb9036938d2df1c3e52f5"
	I1119 22:29:18.561519  197447 cri.go:89] found id: ""
	I1119 22:29:18.561555  197447 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:29:18.573473  197447 retry.go:31] will retry after 481.792912ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:29:18Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:29:19.056041  197447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:29:19.073893  197447 pause.go:52] kubelet running: false
	I1119 22:29:19.073976  197447 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:29:19.204327  197447 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:29:19.204420  197447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:29:19.277640  197447 cri.go:89] found id: "2197a8a76c658581a09f830e15a64684aabb398c2af121782e6e249a1c9691fc"
	I1119 22:29:19.277662  197447 cri.go:89] found id: "0eabb384c878f6fbd11d62c0edc5b450fb1a92835676b3246704eb658f4b65aa"
	I1119 22:29:19.277668  197447 cri.go:89] found id: "feaa3aef65538a4f7c257c3c566ce1333f9b016219ef6f1d794101b05ece0c08"
	I1119 22:29:19.277673  197447 cri.go:89] found id: "f804c882f29f9621335011914f58804da767a84d82dc0993fa81751604c82124"
	I1119 22:29:19.277678  197447 cri.go:89] found id: "32da299595ba8ed38efc7cf713977e86ee07eb1fe3a0ac7c51f1f34b8a6e132e"
	I1119 22:29:19.277682  197447 cri.go:89] found id: "0224b50772a5fad0a10fb5948aaa6aeaae8ab5376e02974888b6e1494c563bce"
	I1119 22:29:19.277685  197447 cri.go:89] found id: "bf5482b53da28621e00f7bd15befc3b9f6a1c547a06579444ce1fd6e26181553"
	I1119 22:29:19.277689  197447 cri.go:89] found id: "44743529a36798e137bd2f9277293ceee4d3643bfa7cb9036938d2df1c3e52f5"
	I1119 22:29:19.277698  197447 cri.go:89] found id: ""
	I1119 22:29:19.277746  197447 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:29:19.292257  197447 out.go:203] 
	W1119 22:29:19.293398  197447 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:29:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:29:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 22:29:19.293419  197447 out.go:285] * 
	* 
	W1119 22:29:19.297255  197447 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 22:29:19.298276  197447 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-340203 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-340203
helpers_test.go:243: (dbg) docker inspect pause-340203:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d",
	        "Created": "2025-11-19T22:28:25.91304757Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185184,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:28:25.968105821Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d/hostname",
	        "HostsPath": "/var/lib/docker/containers/25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d/hosts",
	        "LogPath": "/var/lib/docker/containers/25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d/25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d-json.log",
	        "Name": "/pause-340203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-340203:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-340203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d",
	                "LowerDir": "/var/lib/docker/overlay2/625cd5d4ce28fb509b54ba27921126c6a55752a2b9fd60e181132bc11db1961b-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/625cd5d4ce28fb509b54ba27921126c6a55752a2b9fd60e181132bc11db1961b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/625cd5d4ce28fb509b54ba27921126c6a55752a2b9fd60e181132bc11db1961b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/625cd5d4ce28fb509b54ba27921126c6a55752a2b9fd60e181132bc11db1961b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-340203",
	                "Source": "/var/lib/docker/volumes/pause-340203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-340203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-340203",
	                "name.minikube.sigs.k8s.io": "pause-340203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b1a93c5735e42da9de6d8d3a4137942d2c088b6d639e9e89abc3c29c5a523d50",
	            "SandboxKey": "/var/run/docker/netns/b1a93c5735e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-340203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02703e1c05d7cdab9296ab31957278e1c510dccf50d6d4192ebdbe37994b2273",
	                    "EndpointID": "e1502af3a4e62721637ce2893a9daa165aa5ea1ba59893e5427c35a8963e3d4d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "d6:a6:87:9a:4d:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-340203",
	                        "25ec08c8de42"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
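The State block in the inspect output above reports "Status": "running" and "Paused": false for the kic node container. A minimal, hypothetical Go sketch for extracting just those fields from the same docker inspect JSON (field names follow the output shown above; this is not part of the test suite):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry models only the fields used here; the names match the
// docker inspect JSON shown above, but the struct itself is illustrative.
type inspectEntry struct {
	Name  string `json:"Name"`
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
		Paused  bool   `json:"Paused"`
	} `json:"State"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-340203").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	for _, e := range entries {
		// For this report, the node container shows status "running" and Paused=false.
		fmt.Printf("%s: status=%s paused=%v\n", e.Name, e.State.Status, e.State.Paused)
	}
}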
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-340203 -n pause-340203
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-340203 -n pause-340203: exit status 2 (360.646257ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-340203 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-849026 --memory=3072 --driver=docker  --container-runtime=crio                                            │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │ 19 Nov 25 22:26 UTC │
	│ stop    │ -p scheduled-stop-849026 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --cancel-scheduled                                                                                 │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │ 19 Nov 25 22:26 UTC │
	│ stop    │ -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:27 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:27 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:27 UTC │ 19 Nov 25 22:27 UTC │
	│ delete  │ -p scheduled-stop-849026                                                                                                    │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:27 UTC │ 19 Nov 25 22:28 UTC │
	│ start   │ -p insufficient-storage-060026 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-060026 │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │                     │
	│ delete  │ -p insufficient-storage-060026                                                                                              │ insufficient-storage-060026 │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │ 19 Nov 25 22:28 UTC │
	│ start   │ -p offline-crio-328669 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-328669         │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │                     │
	│ start   │ -p force-systemd-env-630141 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-630141    │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │ 19 Nov 25 22:28 UTC │
	│ start   │ -p pause-340203 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-340203                │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │ 19 Nov 25 22:29 UTC │
	│ start   │ -p stopped-upgrade-459977 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ stopped-upgrade-459977      │ jenkins │ v1.32.0 │ 19 Nov 25 22:28 UTC │                     │
	│ delete  │ -p force-systemd-env-630141                                                                                                 │ force-systemd-env-630141    │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │ 19 Nov 25 22:28 UTC │
	│ start   │ -p force-systemd-flag-631541 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-631541   │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │ 19 Nov 25 22:29 UTC │
	│ start   │ -p pause-340203 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-340203                │ jenkins │ v1.37.0 │ 19 Nov 25 22:29 UTC │ 19 Nov 25 22:29 UTC │
	│ ssh     │ force-systemd-flag-631541 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                        │ force-systemd-flag-631541   │ jenkins │ v1.37.0 │ 19 Nov 25 22:29 UTC │ 19 Nov 25 22:29 UTC │
	│ delete  │ -p force-systemd-flag-631541                                                                                                │ force-systemd-flag-631541   │ jenkins │ v1.37.0 │ 19 Nov 25 22:29 UTC │ 19 Nov 25 22:29 UTC │
	│ pause   │ -p pause-340203 --alsologtostderr -v=5                                                                                      │ pause-340203                │ jenkins │ v1.37.0 │ 19 Nov 25 22:29 UTC │                     │
	│ start   │ -p cert-expiration-855818 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-855818      │ jenkins │ v1.37.0 │ 19 Nov 25 22:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:29:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:29:18.938346  198153 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:29:18.938438  198153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:29:18.938442  198153 out.go:374] Setting ErrFile to fd 2...
	I1119 22:29:18.938444  198153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:29:18.938617  198153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:29:18.939111  198153 out.go:368] Setting JSON to false
	I1119 22:29:18.940181  198153 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4307,"bootTime":1763587052,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:29:18.940226  198153 start.go:143] virtualization: kvm guest
	I1119 22:29:18.942091  198153 out.go:179] * [cert-expiration-855818] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:29:18.943620  198153 notify.go:221] Checking for updates...
	I1119 22:29:18.943649  198153 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:29:18.944823  198153 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:29:18.946238  198153 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:29:18.947345  198153 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:29:18.948453  198153 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:29:18.950339  198153 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:29:18.951760  198153 config.go:182] Loaded profile config "offline-crio-328669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:29:18.951894  198153 config.go:182] Loaded profile config "pause-340203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:29:18.951969  198153 config.go:182] Loaded profile config "stopped-upgrade-459977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1119 22:29:18.952030  198153 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:29:18.975443  198153 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:29:18.975509  198153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:29:19.035675  198153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 22:29:19.023301897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:29:19.035760  198153 docker.go:319] overlay module found
	I1119 22:29:19.037538  198153 out.go:179] * Using the docker driver based on user configuration
	I1119 22:29:19.038649  198153 start.go:309] selected driver: docker
	I1119 22:29:19.038654  198153 start.go:930] validating driver "docker" against <nil>
	I1119 22:29:19.038663  198153 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:29:19.039199  198153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:29:19.099137  198153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 22:29:19.089144296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:29:19.099305  198153 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:29:19.099482  198153 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 22:29:19.102499  198153 out.go:179] * Using Docker driver with root privileges
	I1119 22:29:19.103573  198153 cni.go:84] Creating CNI manager for ""
	I1119 22:29:19.103621  198153 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:29:19.103627  198153 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:29:19.103681  198153 start.go:353] cluster config:
	{Name:cert-expiration-855818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-855818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:29:19.104953  198153 out.go:179] * Starting "cert-expiration-855818" primary control-plane node in "cert-expiration-855818" cluster
	I1119 22:29:19.105811  198153 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:29:19.106907  198153 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:29:19.107972  198153 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:29:19.107991  198153 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:29:19.107998  198153 cache.go:65] Caching tarball of preloaded images
	I1119 22:29:19.108059  198153 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:29:19.108115  198153 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:29:19.108125  198153 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:29:19.108207  198153 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/cert-expiration-855818/config.json ...
	I1119 22:29:19.108248  198153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/cert-expiration-855818/config.json: {Name:mk3f18cef7914e5c6b34f73b8c6ae9d7d032d65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:29:19.134415  198153 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:29:19.134428  198153 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:29:19.134448  198153 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:29:19.134473  198153 start.go:360] acquireMachinesLock for cert-expiration-855818: {Name:mk20f5ee9742fbb36558c555d8b81ba30871f08f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:29:19.134569  198153 start.go:364] duration metric: took 81.81µs to acquireMachinesLock for "cert-expiration-855818"
	I1119 22:29:19.134591  198153 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-855818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-855818 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:29:19.134643  198153 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.555902338Z" level=info msg="Conmon does support the --sync option"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.555925623Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.555942816Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.556733568Z" level=info msg="Conmon does support the --sync option"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.556750154Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.561790615Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.561823435Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.562336602Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.562718085Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.562768968Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.651851155Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-r8gxv Namespace:kube-system ID:0d9315c65cfc51dffb5f0101e0efd54d384c5f3d513d363f641490a3827d21f9 UID:6faed24f-fb12-4829-bdf8-7dcdd16043c5 NetNS:/var/run/netns/b2c287a4-eaf8-4a90-b199-d76fbd97bafe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a7e8}] Aliases:map[]}"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.652036333Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-r8gxv for CNI network kindnet (type=ptp)"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.652375964Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-wf6s7 Namespace:kube-system ID:21916393d08f831910068c6c8a486cbe28ef0941fea323f77bfe1b854190f485 UID:f24831e5-5f4c-4a00-82b1-c46fa5106fa9 NetNS:/var/run/netns/c1749812-6775-4110-bfd4-a501005a1745 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aa00}] Aliases:map[]}"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.65253138Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-wf6s7 for CNI network kindnet (type=ptp)"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653001021Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653027973Z" level=info msg="Starting seccomp notifier watcher"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653087575Z" level=info msg="Create NRI interface"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653192515Z" level=info msg="built-in NRI default validator is disabled"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653207252Z" level=info msg="runtime interface created"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653217594Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653223707Z" level=info msg="runtime interface starting up..."
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653229374Z" level=info msg="starting plugins..."
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653244759Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653679335Z" level=info msg="No systemd watchdog enabled"
	Nov 19 22:29:13 pause-340203 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	2197a8a76c658       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   0                   0d9315c65cfc5       coredns-66bc5c9577-r8gxv               kube-system
	0eabb384c878f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   0                   21916393d08f8       coredns-66bc5c9577-wf6s7               kube-system
	feaa3aef65538       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   27 seconds ago      Running             kube-proxy                0                   2dfbd5b33fc50       kube-proxy-gbvbl                       kube-system
	f804c882f29f9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   27 seconds ago      Running             kindnet-cni               0                   629acf3da4c93       kindnet-wsr7k                          kube-system
	32da299595ba8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   40 seconds ago      Running             kube-controller-manager   0                   4790c36a92f8b       kube-controller-manager-pause-340203   kube-system
	0224b50772a5f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   40 seconds ago      Running             etcd                      0                   bdaa1493b2ec6       etcd-pause-340203                      kube-system
	bf5482b53da28       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   40 seconds ago      Running             kube-apiserver            0                   8e38f50de37bf       kube-apiserver-pause-340203            kube-system
	44743529a3679       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   40 seconds ago      Running             kube-scheduler            0                   b1ec79dd57a70       kube-scheduler-pause-340203            kube-system
	
	
	==> coredns [0eabb384c878f6fbd11d62c0edc5b450fb1a92835676b3246704eb658f4b65aa] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43151 - 25177 "HINFO IN 7320108873066975651.7178136734454026999. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.508165931s
	
	
	==> coredns [2197a8a76c658581a09f830e15a64684aabb398c2af121782e6e249a1c9691fc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39301 - 39075 "HINFO IN 3943308055213410470.8536799390945331853. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066890381s
	
	
	==> describe nodes <==
	Name:               pause-340203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-340203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=pause-340203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_28_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:28:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-340203
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:29:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:29:03 +0000   Wed, 19 Nov 2025 22:28:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:29:03 +0000   Wed, 19 Nov 2025 22:28:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:29:03 +0000   Wed, 19 Nov 2025 22:28:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:29:03 +0000   Wed, 19 Nov 2025 22:29:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-340203
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                ae45d8dc-a3e2-452e-be1c-f98af2273eaf
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-r8gxv                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 coredns-66bc5c9577-wf6s7                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-pause-340203                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-wsr7k                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-pause-340203             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-pause-340203    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-gbvbl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-pause-340203             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 35s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s   kubelet          Node pause-340203 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s   kubelet          Node pause-340203 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s   kubelet          Node pause-340203 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s   node-controller  Node pause-340203 event: Registered Node pause-340203 in Controller
	  Normal  NodeReady                17s   kubelet          Node pause-340203 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [0224b50772a5fad0a10fb5948aaa6aeaae8ab5376e02974888b6e1494c563bce] <==
	{"level":"info","ts":"2025-11-19T22:28:51.947803Z","caller":"traceutil/trace.go:172","msg":"trace[180581103] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-wsr7k; range_end:; response_count:1; response_revision:334; }","duration":"151.528161ms","start":"2025-11-19T22:28:51.796258Z","end":"2025-11-19T22:28:51.947786Z","steps":["trace[180581103] 'agreement among raft nodes before linearized reading'  (duration: 77.877655ms)","trace[180581103] 'range keys from in-memory index tree'  (duration: 73.532148ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:28:51.948701Z","caller":"traceutil/trace.go:172","msg":"trace[939917034] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"230.663463ms","start":"2025-11-19T22:28:51.718019Z","end":"2025-11-19T22:28:51.948683Z","steps":["trace[939917034] 'process raft request'  (duration: 156.178452ms)","trace[939917034] 'compare'  (duration: 73.552106ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:28:51.948838Z","caller":"traceutil/trace.go:172","msg":"trace[1905994579] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"222.009484ms","start":"2025-11-19T22:28:51.726797Z","end":"2025-11-19T22:28:51.948806Z","steps":["trace[1905994579] 'process raft request'  (duration: 221.969175ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:28:51.949040Z","caller":"traceutil/trace.go:172","msg":"trace[1169148167] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"227.698702ms","start":"2025-11-19T22:28:51.721332Z","end":"2025-11-19T22:28:51.949031Z","steps":["trace[1169148167] 'process raft request'  (duration: 227.379285ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:28:51.949271Z","caller":"traceutil/trace.go:172","msg":"trace[1216816664] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"227.885036ms","start":"2025-11-19T22:28:51.721376Z","end":"2025-11-19T22:28:51.949261Z","steps":["trace[1216816664] 'process raft request'  (duration: 227.364959ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:28:51.948757Z","caller":"traceutil/trace.go:172","msg":"trace[194192318] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"227.494205ms","start":"2025-11-19T22:28:51.721249Z","end":"2025-11-19T22:28:51.948743Z","steps":["trace[194192318] 'process raft request'  (duration: 227.371286ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:28:51.956989Z","caller":"traceutil/trace.go:172","msg":"trace[588021865] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"202.575244ms","start":"2025-11-19T22:28:51.754398Z","end":"2025-11-19T22:28:51.956973Z","steps":["trace[588021865] 'process raft request'  (duration: 202.383261ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:28:52.110241Z","caller":"traceutil/trace.go:172","msg":"trace[181314465] linearizableReadLoop","detail":"{readStateIndex:355; appliedIndex:355; }","duration":"133.570957ms","start":"2025-11-19T22:28:51.976649Z","end":"2025-11-19T22:28:52.110220Z","steps":["trace[181314465] 'read index received'  (duration: 133.562572ms)","trace[181314465] 'applied index is now lower than readState.Index'  (duration: 7.267µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:28:52.248654Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"271.987165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4299"}
	{"level":"info","ts":"2025-11-19T22:28:52.248781Z","caller":"traceutil/trace.go:172","msg":"trace[164719357] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:344; }","duration":"272.117949ms","start":"2025-11-19T22:28:51.976644Z","end":"2025-11-19T22:28:52.248762Z","steps":["trace[164719357] 'agreement among raft nodes before linearized reading'  (duration: 133.662436ms)","trace[164719357] 'range keys from in-memory index tree'  (duration: 138.234813ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:28:52.248955Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.481711ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356760491865959 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-gbvbl.187988f8ea17deeb\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-gbvbl.187988f8ea17deeb\" value_size:611 lease:6414984723637089852 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T22:28:52.249072Z","caller":"traceutil/trace.go:172","msg":"trace[1723513530] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"278.161646ms","start":"2025-11-19T22:28:51.970896Z","end":"2025-11-19T22:28:52.249058Z","steps":["trace[1723513530] 'process raft request'  (duration: 139.355216ms)","trace[1723513530] 'compare'  (duration: 138.387422ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:28:52.356960Z","caller":"traceutil/trace.go:172","msg":"trace[382219547] linearizableReadLoop","detail":"{readStateIndex:356; appliedIndex:356; }","duration":"246.649428ms","start":"2025-11-19T22:28:52.110290Z","end":"2025-11-19T22:28:52.356940Z","steps":["trace[382219547] 'read index received'  (duration: 246.640506ms)","trace[382219547] 'applied index is now lower than readState.Index'  (duration: 7.428µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:28:52.366941Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"388.627271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-gbvbl\" limit:1 ","response":"range_response_count:1 size:3429"}
	{"level":"info","ts":"2025-11-19T22:28:52.366999Z","caller":"traceutil/trace.go:172","msg":"trace[1769053483] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-gbvbl; range_end:; response_count:1; response_revision:345; }","duration":"388.699044ms","start":"2025-11-19T22:28:51.978288Z","end":"2025-11-19T22:28:52.366987Z","steps":["trace[1769053483] 'agreement among raft nodes before linearized reading'  (duration: 378.721408ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:28:52.367030Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T22:28:51.978276Z","time spent":"388.745929ms","remote":"127.0.0.1:57470","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":3452,"request content":"key:\"/registry/pods/kube-system/kube-proxy-gbvbl\" limit:1 "}
	{"level":"warn","ts":"2025-11-19T22:28:52.366944Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"320.661376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-11-19T22:28:52.367067Z","caller":"traceutil/trace.go:172","msg":"trace[907427385] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"395.844194ms","start":"2025-11-19T22:28:51.971210Z","end":"2025-11-19T22:28:52.367054Z","steps":["trace[907427385] 'process raft request'  (duration: 385.829833ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:28:52.367084Z","caller":"traceutil/trace.go:172","msg":"trace[1581404332] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:345; }","duration":"320.815608ms","start":"2025-11-19T22:28:52.046254Z","end":"2025-11-19T22:28:52.367070Z","steps":["trace[1581404332] 'agreement among raft nodes before linearized reading'  (duration: 310.770311ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:28:52.367122Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T22:28:52.046240Z","time spent":"320.869318ms","remote":"127.0.0.1:57554","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":208,"request content":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 "}
	{"level":"warn","ts":"2025-11-19T22:28:52.367154Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T22:28:51.971194Z","time spent":"395.899148ms","remote":"127.0.0.1:58528","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4323,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:300 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4274 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2025-11-19T22:28:52.366968Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"380.406129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-340203\" limit:1 ","response":"range_response_count:1 size:5559"}
	{"level":"info","ts":"2025-11-19T22:28:52.367199Z","caller":"traceutil/trace.go:172","msg":"trace[318206098] range","detail":"{range_begin:/registry/minions/pause-340203; range_end:; response_count:1; response_revision:345; }","duration":"380.621796ms","start":"2025-11-19T22:28:51.986555Z","end":"2025-11-19T22:28:52.367176Z","steps":["trace[318206098] 'agreement among raft nodes before linearized reading'  (duration: 370.470395ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:28:52.367279Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T22:28:51.986541Z","time spent":"380.723106ms","remote":"127.0.0.1:57430","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":5582,"request content":"key:\"/registry/minions/pause-340203\" limit:1 "}
	{"level":"info","ts":"2025-11-19T22:28:52.369725Z","caller":"traceutil/trace.go:172","msg":"trace[1166432015] transaction","detail":"{read_only:false; number_of_response:1; response_revision:346; }","duration":"117.074495ms","start":"2025-11-19T22:28:52.252629Z","end":"2025-11-19T22:28:52.369704Z","steps":["trace[1166432015] 'process raft request'  (duration: 117.013309ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:29:20 up  1:11,  0 user,  load average: 4.33, 1.98, 1.29
	Linux pause-340203 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f804c882f29f9621335011914f58804da767a84d82dc0993fa81751604c82124] <==
	I1119 22:28:52.899444       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:28:52.915092       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:28:52.915243       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:28:52.915264       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:28:52.915286       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:28:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:28:53.099997       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:28:53.100058       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:28:53.100071       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:28:53.215479       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:28:53.515133       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:28:53.515209       1 metrics.go:72] Registering metrics
	I1119 22:28:53.515266       1 controller.go:711] "Syncing nftables rules"
	I1119 22:29:03.100516       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:29:03.100573       1 main.go:301] handling current node
	I1119 22:29:13.107429       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:29:13.107454       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bf5482b53da28621e00f7bd15befc3b9f6a1c547a06579444ce1fd6e26181553] <==
	I1119 22:28:42.973469       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:28:42.975561       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:28:42.981055       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:28:42.983076       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:28:42.989731       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:28:42.990052       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:28:43.003943       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 22:28:43.876385       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:28:43.880383       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:28:43.880452       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:28:44.343082       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:28:44.386187       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:28:44.478158       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:28:44.484657       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 22:28:44.485721       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:28:44.490010       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:28:44.896472       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:28:45.243357       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:28:45.253277       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:28:45.261192       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:28:50.246788       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:28:50.827140       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:28:50.868525       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:28:51.090802       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [32da299595ba8ed38efc7cf713977e86ee07eb1fe3a0ac7c51f1f34b8a6e132e] <==
	I1119 22:28:49.893573       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:28:49.893667       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-340203"
	I1119 22:28:49.893722       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:28:49.894825       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 22:28:49.894863       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:28:49.894914       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:28:49.894935       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:28:49.894981       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:28:49.894947       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:28:49.894949       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:28:49.895313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 22:28:49.895525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 22:28:49.895593       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:28:49.896438       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:28:49.896451       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 22:28:49.898698       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 22:28:49.898762       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 22:28:49.898806       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 22:28:49.898833       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:28:49.898841       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:28:49.899100       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:28:49.910065       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:28:49.912879       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-340203" podCIDRs=["10.244.0.0/24"]
	I1119 22:28:49.918917       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:29:04.896676       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [feaa3aef65538a4f7c257c3c566ce1333f9b016219ef6f1d794101b05ece0c08] <==
	I1119 22:28:52.774559       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:28:52.832889       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:28:52.933101       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:28:52.933145       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 22:28:52.933249       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:28:52.953130       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:28:52.953199       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:28:52.961646       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:28:52.962070       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:28:52.962108       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:28:52.964058       1 config.go:200] "Starting service config controller"
	I1119 22:28:52.964074       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:28:52.964104       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:28:52.964110       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:28:52.964127       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:28:52.965018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:28:52.964588       1 config.go:309] "Starting node config controller"
	I1119 22:28:52.965045       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:28:52.965050       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:28:53.065020       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:28:53.065038       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:28:53.065068       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [44743529a36798e137bd2f9277293ceee4d3643bfa7cb9036938d2df1c3e52f5] <==
	E1119 22:28:42.949735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:28:42.949807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:28:42.949851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:28:42.949905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:28:42.949996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:28:42.950001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:28:42.950020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:28:42.950060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:28:42.950111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:28:42.950149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:28:42.950168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:28:42.950332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:28:43.846358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:28:43.850504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:28:43.864748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:28:43.889009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:28:43.897019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:28:43.928294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:28:43.959943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:28:43.988316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:28:44.103093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:28:44.118391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:28:44.143614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:28:44.184976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1119 22:28:44.545574       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:29:10 pause-340203 kubelet[1297]: W1119 22:29:10.141864    1297 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 19 22:29:10 pause-340203 kubelet[1297]: E1119 22:29:10.141997    1297 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:10 pause-340203 kubelet[1297]: E1119 22:29:10.142043    1297 kubelet.go:2996] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:10 pause-340203 kubelet[1297]: E1119 22:29:10.190216    1297 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 19 22:29:10 pause-340203 kubelet[1297]: E1119 22:29:10.190289    1297 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:10 pause-340203 kubelet[1297]: E1119 22:29:10.190308    1297 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:10 pause-340203 kubelet[1297]: W1119 22:29:10.242782    1297 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 19 22:29:10 pause-340203 kubelet[1297]: W1119 22:29:10.382137    1297 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 19 22:29:10 pause-340203 kubelet[1297]: W1119 22:29:10.658178    1297 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.116530    1297 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.116662    1297 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.116695    1297 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.116714    1297 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:11 pause-340203 kubelet[1297]: W1119 22:29:11.135727    1297 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.191347    1297 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.191399    1297 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.191410    1297 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:11 pause-340203 kubelet[1297]: W1119 22:29:11.796526    1297 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 19 22:29:12 pause-340203 kubelet[1297]: E1119 22:29:12.192473    1297 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 19 22:29:12 pause-340203 kubelet[1297]: E1119 22:29:12.192530    1297 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:12 pause-340203 kubelet[1297]: E1119 22:29:12.192546    1297 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:17 pause-340203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:29:17 pause-340203 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:29:17 pause-340203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 22:29:17 pause-340203 systemd[1]: kubelet.service: Consumed 1.309s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-340203 -n pause-340203
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-340203 -n pause-340203: exit status 2 (347.653325ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-340203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-340203
helpers_test.go:243: (dbg) docker inspect pause-340203:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d",
	        "Created": "2025-11-19T22:28:25.91304757Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 185184,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:28:25.968105821Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d/hostname",
	        "HostsPath": "/var/lib/docker/containers/25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d/hosts",
	        "LogPath": "/var/lib/docker/containers/25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d/25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d-json.log",
	        "Name": "/pause-340203",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-340203:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-340203",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "25ec08c8de42fef3ba0a849ffa6df1edbc9b1a4ea2251dc45f90c5adee36485d",
	                "LowerDir": "/var/lib/docker/overlay2/625cd5d4ce28fb509b54ba27921126c6a55752a2b9fd60e181132bc11db1961b-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/625cd5d4ce28fb509b54ba27921126c6a55752a2b9fd60e181132bc11db1961b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/625cd5d4ce28fb509b54ba27921126c6a55752a2b9fd60e181132bc11db1961b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/625cd5d4ce28fb509b54ba27921126c6a55752a2b9fd60e181132bc11db1961b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-340203",
	                "Source": "/var/lib/docker/volumes/pause-340203/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-340203",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-340203",
	                "name.minikube.sigs.k8s.io": "pause-340203",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b1a93c5735e42da9de6d8d3a4137942d2c088b6d639e9e89abc3c29c5a523d50",
	            "SandboxKey": "/var/run/docker/netns/b1a93c5735e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-340203": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "02703e1c05d7cdab9296ab31957278e1c510dccf50d6d4192ebdbe37994b2273",
	                    "EndpointID": "e1502af3a4e62721637ce2893a9daa165aa5ea1ba59893e5427c35a8963e3d4d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "d6:a6:87:9a:4d:16",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-340203",
	                        "25ec08c8de42"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-340203 -n pause-340203
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-340203 -n pause-340203: exit status 2 (336.122719ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-340203 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-340203 logs -n 25: (2.972042814s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-849026 --memory=3072 --driver=docker  --container-runtime=crio                                            │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │ 19 Nov 25 22:26 UTC │
	│ stop    │ -p scheduled-stop-849026 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 5m -v=5 --alsologtostderr                                                               │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --cancel-scheduled                                                                                 │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:26 UTC │ 19 Nov 25 22:26 UTC │
	│ stop    │ -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:27 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:27 UTC │                     │
	│ stop    │ -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:27 UTC │ 19 Nov 25 22:27 UTC │
	│ delete  │ -p scheduled-stop-849026                                                                                                    │ scheduled-stop-849026       │ jenkins │ v1.37.0 │ 19 Nov 25 22:27 UTC │ 19 Nov 25 22:28 UTC │
	│ start   │ -p insufficient-storage-060026 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-060026 │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │                     │
	│ delete  │ -p insufficient-storage-060026                                                                                              │ insufficient-storage-060026 │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │ 19 Nov 25 22:28 UTC │
	│ start   │ -p offline-crio-328669 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-328669         │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │                     │
	│ start   │ -p force-systemd-env-630141 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-630141    │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │ 19 Nov 25 22:28 UTC │
	│ start   │ -p pause-340203 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-340203                │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │ 19 Nov 25 22:29 UTC │
	│ start   │ -p stopped-upgrade-459977 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ stopped-upgrade-459977      │ jenkins │ v1.32.0 │ 19 Nov 25 22:28 UTC │                     │
	│ delete  │ -p force-systemd-env-630141                                                                                                 │ force-systemd-env-630141    │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │ 19 Nov 25 22:28 UTC │
	│ start   │ -p force-systemd-flag-631541 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-631541   │ jenkins │ v1.37.0 │ 19 Nov 25 22:28 UTC │ 19 Nov 25 22:29 UTC │
	│ start   │ -p pause-340203 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-340203                │ jenkins │ v1.37.0 │ 19 Nov 25 22:29 UTC │ 19 Nov 25 22:29 UTC │
	│ ssh     │ force-systemd-flag-631541 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                        │ force-systemd-flag-631541   │ jenkins │ v1.37.0 │ 19 Nov 25 22:29 UTC │ 19 Nov 25 22:29 UTC │
	│ delete  │ -p force-systemd-flag-631541                                                                                                │ force-systemd-flag-631541   │ jenkins │ v1.37.0 │ 19 Nov 25 22:29 UTC │ 19 Nov 25 22:29 UTC │
	│ pause   │ -p pause-340203 --alsologtostderr -v=5                                                                                      │ pause-340203                │ jenkins │ v1.37.0 │ 19 Nov 25 22:29 UTC │                     │
	│ start   │ -p cert-expiration-855818 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-855818      │ jenkins │ v1.37.0 │ 19 Nov 25 22:29 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:29:18
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:29:18.938346  198153 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:29:18.938438  198153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:29:18.938442  198153 out.go:374] Setting ErrFile to fd 2...
	I1119 22:29:18.938444  198153 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:29:18.938617  198153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:29:18.939111  198153 out.go:368] Setting JSON to false
	I1119 22:29:18.940181  198153 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4307,"bootTime":1763587052,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:29:18.940226  198153 start.go:143] virtualization: kvm guest
	I1119 22:29:18.942091  198153 out.go:179] * [cert-expiration-855818] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:29:18.943620  198153 notify.go:221] Checking for updates...
	I1119 22:29:18.943649  198153 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:29:18.944823  198153 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:29:18.946238  198153 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:29:18.947345  198153 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:29:18.948453  198153 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:29:18.950339  198153 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:29:18.951760  198153 config.go:182] Loaded profile config "offline-crio-328669": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:29:18.951894  198153 config.go:182] Loaded profile config "pause-340203": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:29:18.951969  198153 config.go:182] Loaded profile config "stopped-upgrade-459977": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1119 22:29:18.952030  198153 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:29:18.975443  198153 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:29:18.975509  198153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:29:19.035675  198153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 22:29:19.023301897 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:29:19.035760  198153 docker.go:319] overlay module found
	I1119 22:29:19.037538  198153 out.go:179] * Using the docker driver based on user configuration
	I1119 22:29:19.038649  198153 start.go:309] selected driver: docker
	I1119 22:29:19.038654  198153 start.go:930] validating driver "docker" against <nil>
	I1119 22:29:19.038663  198153 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:29:19.039199  198153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:29:19.099137  198153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 22:29:19.089144296 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:29:19.099305  198153 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:29:19.099482  198153 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 22:29:19.102499  198153 out.go:179] * Using Docker driver with root privileges
	I1119 22:29:19.103573  198153 cni.go:84] Creating CNI manager for ""
	I1119 22:29:19.103621  198153 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:29:19.103627  198153 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:29:19.103681  198153 start.go:353] cluster config:
	{Name:cert-expiration-855818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-855818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:29:19.104953  198153 out.go:179] * Starting "cert-expiration-855818" primary control-plane node in "cert-expiration-855818" cluster
	I1119 22:29:19.105811  198153 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:29:19.106907  198153 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:29:19.107972  198153 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:29:19.107991  198153 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:29:19.107998  198153 cache.go:65] Caching tarball of preloaded images
	I1119 22:29:19.108059  198153 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:29:19.108115  198153 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:29:19.108125  198153 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:29:19.108207  198153 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/cert-expiration-855818/config.json ...
	I1119 22:29:19.108248  198153 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/cert-expiration-855818/config.json: {Name:mk3f18cef7914e5c6b34f73b8c6ae9d7d032d65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:29:19.134415  198153 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:29:19.134428  198153 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:29:19.134448  198153 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:29:19.134473  198153 start.go:360] acquireMachinesLock for cert-expiration-855818: {Name:mk20f5ee9742fbb36558c555d8b81ba30871f08f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:29:19.134569  198153 start.go:364] duration metric: took 81.81µs to acquireMachinesLock for "cert-expiration-855818"
	I1119 22:29:19.134591  198153 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-855818 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-855818 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:29:19.134643  198153 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:29:18.254792  184119 out.go:204]   - Generating certificates and keys ...
	I1119 22:29:18.254941  184119 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1119 22:29:18.255048  184119 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1119 22:29:18.446146  184119 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:29:18.580570  184119 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:29:18.768379  184119 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:29:18.818573  184119 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1119 22:29:19.116793  184119 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1119 22:29:19.117046  184119 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost stopped-upgrade-459977] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:29:19.375199  184119 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1119 22:29:19.375393  184119 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost stopped-upgrade-459977] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:29:19.548022  184119 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:29:19.994107  184119 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:29:20.113543  184119 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1119 22:29:20.113670  184119 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:29:20.327374  184119 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:29:20.473578  184119 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:29:20.723331  184119 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:29:21.400301  184119 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:29:21.401050  184119 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:29:21.413483  184119 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.555902338Z" level=info msg="Conmon does support the --sync option"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.555925623Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.555942816Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.556733568Z" level=info msg="Conmon does support the --sync option"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.556750154Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.561790615Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.561823435Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.562336602Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.562718085Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.562768968Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.651851155Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-r8gxv Namespace:kube-system ID:0d9315c65cfc51dffb5f0101e0efd54d384c5f3d513d363f641490a3827d21f9 UID:6faed24f-fb12-4829-bdf8-7dcdd16043c5 NetNS:/var/run/netns/b2c287a4-eaf8-4a90-b199-d76fbd97bafe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008a7e8}] Aliases:map[]}"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.652036333Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-r8gxv for CNI network kindnet (type=ptp)"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.652375964Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-wf6s7 Namespace:kube-system ID:21916393d08f831910068c6c8a486cbe28ef0941fea323f77bfe1b854190f485 UID:f24831e5-5f4c-4a00-82b1-c46fa5106fa9 NetNS:/var/run/netns/c1749812-6775-4110-bfd4-a501005a1745 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aa00}] Aliases:map[]}"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.65253138Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-wf6s7 for CNI network kindnet (type=ptp)"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653001021Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653027973Z" level=info msg="Starting seccomp notifier watcher"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653087575Z" level=info msg="Create NRI interface"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653192515Z" level=info msg="built-in NRI default validator is disabled"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653207252Z" level=info msg="runtime interface created"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653217594Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653223707Z" level=info msg="runtime interface starting up..."
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653229374Z" level=info msg="starting plugins..."
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653244759Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 19 22:29:13 pause-340203 crio[2207]: time="2025-11-19T22:29:13.653679335Z" level=info msg="No systemd watchdog enabled"
	Nov 19 22:29:13 pause-340203 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	2197a8a76c658       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 seconds ago      Running             coredns                   0                   0d9315c65cfc5       coredns-66bc5c9577-r8gxv               kube-system
	0eabb384c878f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   18 seconds ago      Running             coredns                   0                   21916393d08f8       coredns-66bc5c9577-wf6s7               kube-system
	feaa3aef65538       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   29 seconds ago      Running             kube-proxy                0                   2dfbd5b33fc50       kube-proxy-gbvbl                       kube-system
	f804c882f29f9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   29 seconds ago      Running             kindnet-cni               0                   629acf3da4c93       kindnet-wsr7k                          kube-system
	32da299595ba8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   42 seconds ago      Running             kube-controller-manager   0                   4790c36a92f8b       kube-controller-manager-pause-340203   kube-system
	0224b50772a5f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   42 seconds ago      Running             etcd                      0                   bdaa1493b2ec6       etcd-pause-340203                      kube-system
	bf5482b53da28       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   42 seconds ago      Running             kube-apiserver            0                   8e38f50de37bf       kube-apiserver-pause-340203            kube-system
	44743529a3679       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   42 seconds ago      Running             kube-scheduler            0                   b1ec79dd57a70       kube-scheduler-pause-340203            kube-system
	
	
	==> coredns [0eabb384c878f6fbd11d62c0edc5b450fb1a92835676b3246704eb658f4b65aa] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43151 - 25177 "HINFO IN 7320108873066975651.7178136734454026999. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.508165931s
	
	
	==> coredns [2197a8a76c658581a09f830e15a64684aabb398c2af121782e6e249a1c9691fc] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39301 - 39075 "HINFO IN 3943308055213410470.8536799390945331853. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066890381s
	
	
	==> describe nodes <==
	Name:               pause-340203
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-340203
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=pause-340203
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_28_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:28:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-340203
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:29:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:29:03 +0000   Wed, 19 Nov 2025 22:28:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:29:03 +0000   Wed, 19 Nov 2025 22:28:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:29:03 +0000   Wed, 19 Nov 2025 22:28:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:29:03 +0000   Wed, 19 Nov 2025 22:29:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-340203
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                ae45d8dc-a3e2-452e-be1c-f98af2273eaf
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-r8gxv                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 coredns-66bc5c9577-wf6s7                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-pause-340203                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-wsr7k                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-pause-340203             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-pause-340203    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-gbvbl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-pause-340203             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 29s   kube-proxy       
	  Normal  Starting                 37s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s   kubelet          Node pause-340203 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s   kubelet          Node pause-340203 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s   kubelet          Node pause-340203 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           33s   node-controller  Node pause-340203 event: Registered Node pause-340203 in Controller
	  Normal  NodeReady                19s   kubelet          Node pause-340203 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [0224b50772a5fad0a10fb5948aaa6aeaae8ab5376e02974888b6e1494c563bce] <==
	{"level":"info","ts":"2025-11-19T22:28:51.947803Z","caller":"traceutil/trace.go:172","msg":"trace[180581103] range","detail":"{range_begin:/registry/pods/kube-system/kindnet-wsr7k; range_end:; response_count:1; response_revision:334; }","duration":"151.528161ms","start":"2025-11-19T22:28:51.796258Z","end":"2025-11-19T22:28:51.947786Z","steps":["trace[180581103] 'agreement among raft nodes before linearized reading'  (duration: 77.877655ms)","trace[180581103] 'range keys from in-memory index tree'  (duration: 73.532148ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:28:51.948701Z","caller":"traceutil/trace.go:172","msg":"trace[939917034] transaction","detail":"{read_only:false; response_revision:335; number_of_response:1; }","duration":"230.663463ms","start":"2025-11-19T22:28:51.718019Z","end":"2025-11-19T22:28:51.948683Z","steps":["trace[939917034] 'process raft request'  (duration: 156.178452ms)","trace[939917034] 'compare'  (duration: 73.552106ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:28:51.948838Z","caller":"traceutil/trace.go:172","msg":"trace[1905994579] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"222.009484ms","start":"2025-11-19T22:28:51.726797Z","end":"2025-11-19T22:28:51.948806Z","steps":["trace[1905994579] 'process raft request'  (duration: 221.969175ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:28:51.949040Z","caller":"traceutil/trace.go:172","msg":"trace[1169148167] transaction","detail":"{read_only:false; response_revision:337; number_of_response:1; }","duration":"227.698702ms","start":"2025-11-19T22:28:51.721332Z","end":"2025-11-19T22:28:51.949031Z","steps":["trace[1169148167] 'process raft request'  (duration: 227.379285ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:28:51.949271Z","caller":"traceutil/trace.go:172","msg":"trace[1216816664] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"227.885036ms","start":"2025-11-19T22:28:51.721376Z","end":"2025-11-19T22:28:51.949261Z","steps":["trace[1216816664] 'process raft request'  (duration: 227.364959ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:28:51.948757Z","caller":"traceutil/trace.go:172","msg":"trace[194192318] transaction","detail":"{read_only:false; response_revision:336; number_of_response:1; }","duration":"227.494205ms","start":"2025-11-19T22:28:51.721249Z","end":"2025-11-19T22:28:51.948743Z","steps":["trace[194192318] 'process raft request'  (duration: 227.371286ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:28:51.956989Z","caller":"traceutil/trace.go:172","msg":"trace[588021865] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"202.575244ms","start":"2025-11-19T22:28:51.754398Z","end":"2025-11-19T22:28:51.956973Z","steps":["trace[588021865] 'process raft request'  (duration: 202.383261ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:28:52.110241Z","caller":"traceutil/trace.go:172","msg":"trace[181314465] linearizableReadLoop","detail":"{readStateIndex:355; appliedIndex:355; }","duration":"133.570957ms","start":"2025-11-19T22:28:51.976649Z","end":"2025-11-19T22:28:52.110220Z","steps":["trace[181314465] 'read index received'  (duration: 133.562572ms)","trace[181314465] 'applied index is now lower than readState.Index'  (duration: 7.267µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:28:52.248654Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"271.987165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4299"}
	{"level":"info","ts":"2025-11-19T22:28:52.248781Z","caller":"traceutil/trace.go:172","msg":"trace[164719357] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:344; }","duration":"272.117949ms","start":"2025-11-19T22:28:51.976644Z","end":"2025-11-19T22:28:52.248762Z","steps":["trace[164719357] 'agreement among raft nodes before linearized reading'  (duration: 133.662436ms)","trace[164719357] 'range keys from in-memory index tree'  (duration: 138.234813ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:28:52.248955Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.481711ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356760491865959 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-gbvbl.187988f8ea17deeb\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-gbvbl.187988f8ea17deeb\" value_size:611 lease:6414984723637089852 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T22:28:52.249072Z","caller":"traceutil/trace.go:172","msg":"trace[1723513530] transaction","detail":"{read_only:false; response_revision:345; number_of_response:1; }","duration":"278.161646ms","start":"2025-11-19T22:28:51.970896Z","end":"2025-11-19T22:28:52.249058Z","steps":["trace[1723513530] 'process raft request'  (duration: 139.355216ms)","trace[1723513530] 'compare'  (duration: 138.387422ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:28:52.356960Z","caller":"traceutil/trace.go:172","msg":"trace[382219547] linearizableReadLoop","detail":"{readStateIndex:356; appliedIndex:356; }","duration":"246.649428ms","start":"2025-11-19T22:28:52.110290Z","end":"2025-11-19T22:28:52.356940Z","steps":["trace[382219547] 'read index received'  (duration: 246.640506ms)","trace[382219547] 'applied index is now lower than readState.Index'  (duration: 7.428µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:28:52.366941Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"388.627271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-gbvbl\" limit:1 ","response":"range_response_count:1 size:3429"}
	{"level":"info","ts":"2025-11-19T22:28:52.366999Z","caller":"traceutil/trace.go:172","msg":"trace[1769053483] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-gbvbl; range_end:; response_count:1; response_revision:345; }","duration":"388.699044ms","start":"2025-11-19T22:28:51.978288Z","end":"2025-11-19T22:28:52.366987Z","steps":["trace[1769053483] 'agreement among raft nodes before linearized reading'  (duration: 378.721408ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:28:52.367030Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T22:28:51.978276Z","time spent":"388.745929ms","remote":"127.0.0.1:57470","response type":"/etcdserverpb.KV/Range","request count":0,"request size":47,"response count":1,"response size":3452,"request content":"key:\"/registry/pods/kube-system/kube-proxy-gbvbl\" limit:1 "}
	{"level":"warn","ts":"2025-11-19T22:28:52.366944Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"320.661376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-11-19T22:28:52.367067Z","caller":"traceutil/trace.go:172","msg":"trace[907427385] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"395.844194ms","start":"2025-11-19T22:28:51.971210Z","end":"2025-11-19T22:28:52.367054Z","steps":["trace[907427385] 'process raft request'  (duration: 385.829833ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:28:52.367084Z","caller":"traceutil/trace.go:172","msg":"trace[1581404332] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:345; }","duration":"320.815608ms","start":"2025-11-19T22:28:52.046254Z","end":"2025-11-19T22:28:52.367070Z","steps":["trace[1581404332] 'agreement among raft nodes before linearized reading'  (duration: 310.770311ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:28:52.367122Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T22:28:52.046240Z","time spent":"320.869318ms","remote":"127.0.0.1:57554","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":208,"request content":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 "}
	{"level":"warn","ts":"2025-11-19T22:28:52.367154Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T22:28:51.971194Z","time spent":"395.899148ms","remote":"127.0.0.1:58528","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4323,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:300 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4274 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2025-11-19T22:28:52.366968Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"380.406129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-340203\" limit:1 ","response":"range_response_count:1 size:5559"}
	{"level":"info","ts":"2025-11-19T22:28:52.367199Z","caller":"traceutil/trace.go:172","msg":"trace[318206098] range","detail":"{range_begin:/registry/minions/pause-340203; range_end:; response_count:1; response_revision:345; }","duration":"380.621796ms","start":"2025-11-19T22:28:51.986555Z","end":"2025-11-19T22:28:52.367176Z","steps":["trace[318206098] 'agreement among raft nodes before linearized reading'  (duration: 370.470395ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:28:52.367279Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-19T22:28:51.986541Z","time spent":"380.723106ms","remote":"127.0.0.1:57430","response type":"/etcdserverpb.KV/Range","request count":0,"request size":34,"response count":1,"response size":5582,"request content":"key:\"/registry/minions/pause-340203\" limit:1 "}
	{"level":"info","ts":"2025-11-19T22:28:52.369725Z","caller":"traceutil/trace.go:172","msg":"trace[1166432015] transaction","detail":"{read_only:false; number_of_response:1; response_revision:346; }","duration":"117.074495ms","start":"2025-11-19T22:28:52.252629Z","end":"2025-11-19T22:28:52.369704Z","steps":["trace[1166432015] 'process raft request'  (duration: 117.013309ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:29:23 up  1:11,  0 user,  load average: 4.33, 1.98, 1.29
	Linux pause-340203 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f804c882f29f9621335011914f58804da767a84d82dc0993fa81751604c82124] <==
	I1119 22:28:52.899444       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:28:52.915092       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:28:52.915243       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:28:52.915264       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:28:52.915286       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:28:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:28:53.099997       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:28:53.100058       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:28:53.100071       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:28:53.215479       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:28:53.515133       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:28:53.515209       1 metrics.go:72] Registering metrics
	I1119 22:28:53.515266       1 controller.go:711] "Syncing nftables rules"
	I1119 22:29:03.100516       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:29:03.100573       1 main.go:301] handling current node
	I1119 22:29:13.107429       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:29:13.107454       1 main.go:301] handling current node
	I1119 22:29:23.107094       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:29:23.107125       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bf5482b53da28621e00f7bd15befc3b9f6a1c547a06579444ce1fd6e26181553] <==
	I1119 22:28:42.973469       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:28:42.975561       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:28:42.981055       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:28:42.983076       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:28:42.989731       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:28:42.990052       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:28:43.003943       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 22:28:43.876385       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:28:43.880383       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:28:43.880452       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:28:44.343082       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:28:44.386187       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:28:44.478158       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:28:44.484657       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 22:28:44.485721       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:28:44.490010       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:28:44.896472       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:28:45.243357       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:28:45.253277       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:28:45.261192       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:28:50.246788       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:28:50.827140       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:28:50.868525       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:28:51.090802       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:28:51.090802       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [32da299595ba8ed38efc7cf713977e86ee07eb1fe3a0ac7c51f1f34b8a6e132e] <==
	I1119 22:28:49.893573       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:28:49.893667       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-340203"
	I1119 22:28:49.893722       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:28:49.894825       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 22:28:49.894863       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:28:49.894914       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:28:49.894935       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:28:49.894981       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:28:49.894947       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:28:49.894949       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:28:49.895313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 22:28:49.895525       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 22:28:49.895593       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:28:49.896438       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:28:49.896451       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 22:28:49.898698       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 22:28:49.898762       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 22:28:49.898806       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 22:28:49.898833       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:28:49.898841       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:28:49.899100       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:28:49.910065       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:28:49.912879       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-340203" podCIDRs=["10.244.0.0/24"]
	I1119 22:28:49.918917       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:29:04.896676       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [feaa3aef65538a4f7c257c3c566ce1333f9b016219ef6f1d794101b05ece0c08] <==
	I1119 22:28:52.774559       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:28:52.832889       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:28:52.933101       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:28:52.933145       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 22:28:52.933249       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:28:52.953130       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:28:52.953199       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:28:52.961646       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:28:52.962070       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:28:52.962108       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:28:52.964058       1 config.go:200] "Starting service config controller"
	I1119 22:28:52.964074       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:28:52.964104       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:28:52.964110       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:28:52.964127       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:28:52.965018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:28:52.964588       1 config.go:309] "Starting node config controller"
	I1119 22:28:52.965045       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:28:52.965050       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:28:53.065020       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:28:53.065038       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:28:53.065068       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [44743529a36798e137bd2f9277293ceee4d3643bfa7cb9036938d2df1c3e52f5] <==
	E1119 22:28:42.949735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:28:42.949807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:28:42.949851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:28:42.949905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:28:42.949996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:28:42.950001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:28:42.950020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:28:42.950060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:28:42.950111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:28:42.950149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:28:42.950168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:28:42.950332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:28:43.846358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:28:43.850504       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:28:43.864748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:28:43.889009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:28:43.897019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:28:43.928294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:28:43.959943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:28:43.988316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:28:44.103093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:28:44.118391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:28:44.143614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:28:44.184976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1119 22:28:44.545574       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:29:10 pause-340203 kubelet[1297]: W1119 22:29:10.141864    1297 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 19 22:29:10 pause-340203 kubelet[1297]: E1119 22:29:10.141997    1297 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:10 pause-340203 kubelet[1297]: E1119 22:29:10.142043    1297 kubelet.go:2996] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:10 pause-340203 kubelet[1297]: E1119 22:29:10.190216    1297 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 19 22:29:10 pause-340203 kubelet[1297]: E1119 22:29:10.190289    1297 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:10 pause-340203 kubelet[1297]: E1119 22:29:10.190308    1297 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:10 pause-340203 kubelet[1297]: W1119 22:29:10.242782    1297 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 19 22:29:10 pause-340203 kubelet[1297]: W1119 22:29:10.382137    1297 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 19 22:29:10 pause-340203 kubelet[1297]: W1119 22:29:10.658178    1297 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.116530    1297 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.116662    1297 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.116695    1297 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.116714    1297 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:11 pause-340203 kubelet[1297]: W1119 22:29:11.135727    1297 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.191347    1297 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.191399    1297 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:11 pause-340203 kubelet[1297]: E1119 22:29:11.191410    1297 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:11 pause-340203 kubelet[1297]: W1119 22:29:11.796526    1297 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 19 22:29:12 pause-340203 kubelet[1297]: E1119 22:29:12.192473    1297 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 19 22:29:12 pause-340203 kubelet[1297]: E1119 22:29:12.192530    1297 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:12 pause-340203 kubelet[1297]: E1119 22:29:12.192546    1297 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 19 22:29:17 pause-340203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:29:17 pause-340203 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:29:17 pause-340203 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 22:29:17 pause-340203 systemd[1]: kubelet.service: Consumed 1.309s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-340203 -n pause-340203
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-340203 -n pause-340203: exit status 2 (397.490149ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-340203 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-680619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-680619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (229.800844ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:32:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-680619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-680619 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-680619 describe deploy/metrics-server -n kube-system: exit status 1 (55.67129ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-680619 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-680619
helpers_test.go:243: (dbg) docker inspect old-k8s-version-680619:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919",
	        "Created": "2025-11-19T22:31:10.323294154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 233161,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:31:10.355034239Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919/hostname",
	        "HostsPath": "/var/lib/docker/containers/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919/hosts",
	        "LogPath": "/var/lib/docker/containers/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919-json.log",
	        "Name": "/old-k8s-version-680619",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-680619:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-680619",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919",
	                "LowerDir": "/var/lib/docker/overlay2/4d051ee79cf99bebb3106f63a795d2de9d9c603c6888b47d14e37e908cc4d8ac-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d051ee79cf99bebb3106f63a795d2de9d9c603c6888b47d14e37e908cc4d8ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d051ee79cf99bebb3106f63a795d2de9d9c603c6888b47d14e37e908cc4d8ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d051ee79cf99bebb3106f63a795d2de9d9c603c6888b47d14e37e908cc4d8ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-680619",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-680619/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-680619",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-680619",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-680619",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "321bb1d8d4c4b3454dc61bd97f49eea7b9b9e9b853c58c06ff5979e6bd79055e",
	            "SandboxKey": "/var/run/docker/netns/321bb1d8d4c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-680619": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7d9a9064074d1b313e6d7afbded8c0b7d9aaeb41b178a1f248c1547e69e77bbc",
	                    "EndpointID": "8d484443a5a6a8329bdb2dd543dda1864832339262572bc05504827d3d138101",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "2e:9a:5f:f2:bf:3f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-680619",
	                        "08365271d4a4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680619 -n old-k8s-version-680619
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-680619 logs -n 25
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-654834 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-654834             │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ ssh     │ -p cilium-654834 sudo containerd config dump                                                                                                                                                                                                  │ cilium-654834             │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ ssh     │ -p cilium-654834 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-654834             │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ ssh     │ -p cilium-654834 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-654834             │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ ssh     │ -p cilium-654834 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-654834             │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ ssh     │ -p cilium-654834 sudo crio config                                                                                                                                                                                                             │ cilium-654834             │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ delete  │ -p cilium-654834                                                                                                                                                                                                                              │ cilium-654834             │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p missing-upgrade-015670 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-015670    │ jenkins │ v1.32.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p NoKubernetes-662839 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ delete  │ -p running-upgrade-083468                                                                                                                                                                                                                     │ running-upgrade-083468    │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-801704 │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ delete  │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p NoKubernetes-662839 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p missing-upgrade-015670 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-015670    │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:31 UTC │
	│ ssh     │ -p NoKubernetes-662839 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ stop    │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p NoKubernetes-662839 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ stop    │ -p kubernetes-upgrade-801704                                                                                                                                                                                                                  │ kubernetes-upgrade-801704 │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-801704 │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ ssh     │ -p NoKubernetes-662839 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ delete  │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ delete  │ -p missing-upgrade-015670                                                                                                                                                                                                                     │ missing-upgrade-015670    │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-680619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:31:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:31:24.627322  236895 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:31:24.627618  236895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:31:24.627630  236895 out.go:374] Setting ErrFile to fd 2...
	I1119 22:31:24.627635  236895 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:31:24.627839  236895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:31:24.628367  236895 out.go:368] Setting JSON to false
	I1119 22:31:24.629718  236895 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4433,"bootTime":1763587052,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:31:24.629836  236895 start.go:143] virtualization: kvm guest
	I1119 22:31:24.631761  236895 out.go:179] * [no-preload-178067] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:31:24.633099  236895 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:31:24.633103  236895 notify.go:221] Checking for updates...
	I1119 22:31:24.634515  236895 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:31:24.635964  236895 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:31:24.637787  236895 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:31:24.639011  236895 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:31:24.640275  236895 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:31:24.641660  236895 config.go:182] Loaded profile config "cert-expiration-855818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:31:24.641826  236895 config.go:182] Loaded profile config "kubernetes-upgrade-801704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:31:24.641980  236895 config.go:182] Loaded profile config "old-k8s-version-680619": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 22:31:24.642104  236895 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:31:24.672393  236895 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:31:24.672551  236895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:31:24.748571  236895 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-19 22:31:24.737321524 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:31:24.748693  236895 docker.go:319] overlay module found
	I1119 22:31:24.750945  236895 out.go:179] * Using the docker driver based on user configuration
	I1119 22:31:24.751925  236895 start.go:309] selected driver: docker
	I1119 22:31:24.751944  236895 start.go:930] validating driver "docker" against <nil>
	I1119 22:31:24.751956  236895 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:31:24.752648  236895 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:31:24.809310  236895 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-19 22:31:24.800193094 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:31:24.809454  236895 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:31:24.809714  236895 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:31:24.811199  236895 out.go:179] * Using Docker driver with root privileges
	I1119 22:31:24.812187  236895 cni.go:84] Creating CNI manager for ""
	I1119 22:31:24.812241  236895 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:31:24.812259  236895 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:31:24.812322  236895 start.go:353] cluster config:
	{Name:no-preload-178067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-178067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:31:24.813711  236895 out.go:179] * Starting "no-preload-178067" primary control-plane node in "no-preload-178067" cluster
	I1119 22:31:24.814810  236895 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:31:24.816414  236895 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:31:24.817312  236895 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:31:24.817398  236895 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:31:24.817433  236895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/config.json ...
	I1119 22:31:24.817467  236895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/config.json: {Name:mkdc3549a7ae58bb50231841fb8033ce7d1d1980 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:31:24.817563  236895 cache.go:107] acquiring lock: {Name:mke6fc07bd79e2ebee1ebe8b461aff72b64f2cc5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:31:24.817660  236895 cache.go:115] /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1119 22:31:24.817642  236895 cache.go:107] acquiring lock: {Name:mk91b7300ebd19f94f20a345d632ec4426c94d72 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:31:24.817672  236895 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 121.42µs
	I1119 22:31:24.817687  236895 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1119 22:31:24.817654  236895 cache.go:107] acquiring lock: {Name:mk1c92371f2c57c84b42324b21b11179eb48a811 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:31:24.817687  236895 cache.go:107] acquiring lock: {Name:mke18063c69f14b10082abd08bf8bde71aaa8ad5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:31:24.817748  236895 cache.go:107] acquiring lock: {Name:mk2a7356aebe6c8542f8091ac9c4da6e51977dac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:31:24.817785  236895 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:31:24.817806  236895 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:31:24.817863  236895 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:31:24.817948  236895 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:31:24.818040  236895 cache.go:107] acquiring lock: {Name:mk7a900f277ad30c98b1cfda37d22d541a372635 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:31:24.818043  236895 cache.go:107] acquiring lock: {Name:mke58d0859bf10f0562ff9c9b5f24457d6b7a84c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:31:24.818095  236895 cache.go:107] acquiring lock: {Name:mk56cb178c3a7527bd681b164b0f3a2a1f0e9cee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:31:24.818124  236895 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:31:24.818165  236895 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:31:24.818782  236895 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 22:31:24.819495  236895 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:31:24.819506  236895 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:31:24.819523  236895 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:31:24.819541  236895 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:31:24.819649  236895 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:31:24.820120  236895 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:31:24.820440  236895 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 22:31:24.841153  236895 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:31:24.841171  236895 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:31:24.841186  236895 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:31:24.841211  236895 start.go:360] acquireMachinesLock for no-preload-178067: {Name:mkac1dd7480653e08493d42d4ced2bfbfbeaae21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:31:24.841290  236895 start.go:364] duration metric: took 66.602µs to acquireMachinesLock for "no-preload-178067"
	I1119 22:31:24.841311  236895 start.go:93] Provisioning new machine with config: &{Name:no-preload-178067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-178067 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:31:24.841367  236895 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:31:24.449729  230837 out.go:252]   - Configuring RBAC rules ...
	I1119 22:31:24.449930  230837 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:31:24.453620  230837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:31:24.459333  230837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:31:24.462570  230837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:31:24.465001  230837 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:31:24.468297  230837 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:31:24.477515  230837 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:31:24.655754  230837 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:31:24.858208  230837 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:31:24.859465  230837 kubeadm.go:319] 
	I1119 22:31:24.859554  230837 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:31:24.859581  230837 kubeadm.go:319] 
	I1119 22:31:24.859712  230837 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:31:24.859731  230837 kubeadm.go:319] 
	I1119 22:31:24.859769  230837 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:31:24.859875  230837 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:31:24.859944  230837 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:31:24.859954  230837 kubeadm.go:319] 
	I1119 22:31:24.860026  230837 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:31:24.860034  230837 kubeadm.go:319] 
	I1119 22:31:24.860094  230837 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:31:24.860105  230837 kubeadm.go:319] 
	I1119 22:31:24.860175  230837 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:31:24.860300  230837 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:31:24.860416  230837 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:31:24.860433  230837 kubeadm.go:319] 
	I1119 22:31:24.860577  230837 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:31:24.860684  230837 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:31:24.860694  230837 kubeadm.go:319] 
	I1119 22:31:24.860808  230837 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qh11tz.m64c7kissws3fzq4 \
	I1119 22:31:24.860969  230837 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b \
	I1119 22:31:24.861005  230837 kubeadm.go:319] 	--control-plane 
	I1119 22:31:24.861015  230837 kubeadm.go:319] 
	I1119 22:31:24.861128  230837 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:31:24.861136  230837 kubeadm.go:319] 
	I1119 22:31:24.861245  230837 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qh11tz.m64c7kissws3fzq4 \
	I1119 22:31:24.861408  230837 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b 
	I1119 22:31:24.863268  230837 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:31:24.863434  230837 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:31:24.863462  230837 cni.go:84] Creating CNI manager for ""
	I1119 22:31:24.863470  230837 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:31:24.864899  230837 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:31:24.865996  230837 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:31:24.870470  230837 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1119 22:31:24.870489  230837 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:31:24.884529  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
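For context, the kubectl apply above installs the kindnet manifest that minikube recommends when the docker driver is paired with the crio runtime. A quick way to confirm the rollout afterwards is sketched below; the paths mirror the log, but the `app=kindnet` label selector is an assumption about the manifest, not something taken from this report.

    # Hedged sketch: check the kindnet DaemonSet created by the CNI apply above.
    # Binary and kubeconfig paths are copied from the log; the label is an assumption.
    sudo /var/lib/minikube/binaries/v1.28.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get daemonsets,pods -l app=kindnet -o wide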
	I1119 22:31:22.214262  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:31:22.214294  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:24.842684  236895 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:31:24.842927  236895 start.go:159] libmachine.API.Create for "no-preload-178067" (driver="docker")
	I1119 22:31:24.842955  236895 client.go:173] LocalClient.Create starting
	I1119 22:31:24.843009  236895 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem
	I1119 22:31:24.843036  236895 main.go:143] libmachine: Decoding PEM data...
	I1119 22:31:24.843049  236895 main.go:143] libmachine: Parsing certificate...
	I1119 22:31:24.843095  236895 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem
	I1119 22:31:24.843113  236895 main.go:143] libmachine: Decoding PEM data...
	I1119 22:31:24.843122  236895 main.go:143] libmachine: Parsing certificate...
	I1119 22:31:24.843396  236895 cli_runner.go:164] Run: docker network inspect no-preload-178067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:31:24.862949  236895 cli_runner.go:211] docker network inspect no-preload-178067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:31:24.863018  236895 network_create.go:284] running [docker network inspect no-preload-178067] to gather additional debugging logs...
	I1119 22:31:24.863042  236895 cli_runner.go:164] Run: docker network inspect no-preload-178067
	W1119 22:31:24.881117  236895 cli_runner.go:211] docker network inspect no-preload-178067 returned with exit code 1
	I1119 22:31:24.881139  236895 network_create.go:287] error running [docker network inspect no-preload-178067]: docker network inspect no-preload-178067: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-178067 not found
	I1119 22:31:24.881147  236895 network_create.go:289] output of [docker network inspect no-preload-178067]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-178067 not found
	
	** /stderr **
	I1119 22:31:24.881208  236895 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:31:24.899391  236895 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cde0f356bd10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b5:fa:ba:e0:a6} reservation:<nil>}
	I1119 22:31:24.900028  236895 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-47fb5ce24a02 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:30:91:0e:d6:d9} reservation:<nil>}
	I1119 22:31:24.900656  236895 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2592199ffac9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:9b:dd:65:07:28} reservation:<nil>}
	I1119 22:31:24.901068  236895 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7d9a9064074d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:c1:e4:50:35:aa} reservation:<nil>}
	I1119 22:31:24.901527  236895 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-a56885aa2b67 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:2e:f4:12:f0:2d:0e} reservation:<nil>}
	I1119 22:31:24.901872  236895 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-fc673f34e64f IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:76:61:80:89:89:70} reservation:<nil>}
	I1119 22:31:24.902416  236895 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e02480}
	I1119 22:31:24.902434  236895 network_create.go:124] attempt to create docker network no-preload-178067 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1119 22:31:24.902467  236895 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-178067 no-preload-178067
	I1119 22:31:24.956485  236895 network_create.go:108] docker network no-preload-178067 192.168.103.0/24 created
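The "skipping subnet" lines above are minikube stepping through /24 ranges already claimed by other profile networks until it finds a free one, here 192.168.103.0/24. For readability, the network-create invocation from the entry at 22:31:24.902467 is unwrapped below as a standalone sketch; all values are copied from the log, and this is for reference only, not a suggestion to create the network by hand.

    # Standalone form of the bridge network minikube created above (values from the log).
    docker network create --driver=bridge \
      --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=no-preload-178067 \
      no-preload-178067
    # The inspection the earlier cli_runner calls wrap, reduced to the IPAM block:
    docker network inspect no-preload-178067 --format '{{json .IPAM.Config}}'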
	I1119 22:31:24.956517  236895 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-178067" container
	I1119 22:31:24.956601  236895 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:31:24.987249  236895 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 22:31:24.987380  236895 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 22:31:24.987640  236895 cli_runner.go:164] Run: docker volume create no-preload-178067 --label name.minikube.sigs.k8s.io=no-preload-178067 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:31:24.991743  236895 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 22:31:25.004196  236895 oci.go:103] Successfully created a docker volume no-preload-178067
	I1119 22:31:25.004280  236895 cli_runner.go:164] Run: docker run --rm --name no-preload-178067-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-178067 --entrypoint /usr/bin/test -v no-preload-178067:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:31:25.004905  236895 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1119 22:31:25.010996  236895 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 22:31:25.013156  236895 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 22:31:25.038826  236895 cache.go:162] opening:  /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1119 22:31:25.123563  236895 cache.go:157] /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1119 22:31:25.123594  236895 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 305.501676ms
	I1119 22:31:25.123612  236895 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1119 22:31:25.340446  236895 cache.go:157] /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1119 22:31:25.340470  236895 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 522.836483ms
	I1119 22:31:25.340480  236895 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1119 22:31:25.461810  236895 oci.go:107] Successfully prepared a docker volume no-preload-178067
	I1119 22:31:25.461866  236895 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1119 22:31:25.461951  236895 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:31:25.461981  236895 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:31:25.462022  236895 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:31:25.526855  236895 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-178067 --name no-preload-178067 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-178067 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-178067 --network no-preload-178067 --ip 192.168.103.2 --volume no-preload-178067:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:31:25.865983  236895 cli_runner.go:164] Run: docker container inspect no-preload-178067 --format={{.State.Running}}
	I1119 22:31:25.884564  236895 cli_runner.go:164] Run: docker container inspect no-preload-178067 --format={{.State.Status}}
	I1119 22:31:25.903781  236895 cli_runner.go:164] Run: docker exec no-preload-178067 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:31:25.949100  236895 oci.go:144] the created container "no-preload-178067" has a running status.
	I1119 22:31:25.949126  236895 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/no-preload-178067/id_rsa...
	I1119 22:31:26.392156  236895 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9335/.minikube/machines/no-preload-178067/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:31:26.412894  236895 cache.go:157] /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1119 22:31:26.412926  236895 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.594889452s
	I1119 22:31:26.412986  236895 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1119 22:31:26.424435  236895 cli_runner.go:164] Run: docker container inspect no-preload-178067 --format={{.State.Status}}
	I1119 22:31:26.440455  236895 cache.go:157] /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1119 22:31:26.440480  236895 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.622793796s
	I1119 22:31:26.440495  236895 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1119 22:31:26.448397  236895 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:31:26.448419  236895 kic_runner.go:114] Args: [docker exec --privileged no-preload-178067 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:31:26.506088  236895 cli_runner.go:164] Run: docker container inspect no-preload-178067 --format={{.State.Status}}
	I1119 22:31:26.526068  236895 cache.go:157] /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1119 22:31:26.526097  236895 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.708059642s
	I1119 22:31:26.526115  236895 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1119 22:31:26.528804  236895 machine.go:94] provisionDockerMachine start ...
	I1119 22:31:26.528916  236895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178067
	I1119 22:31:26.548169  236895 main.go:143] libmachine: Using SSH client type: native
	I1119 22:31:26.548411  236895 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1119 22:31:26.548426  236895 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:31:26.682629  236895 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-178067
	
	I1119 22:31:26.682657  236895 ubuntu.go:182] provisioning hostname "no-preload-178067"
	I1119 22:31:26.682708  236895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178067
	I1119 22:31:26.700696  236895 main.go:143] libmachine: Using SSH client type: native
	I1119 22:31:26.700911  236895 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1119 22:31:26.700924  236895 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-178067 && echo "no-preload-178067" | sudo tee /etc/hostname
	I1119 22:31:26.842718  236895 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-178067
	
	I1119 22:31:26.842801  236895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178067
	I1119 22:31:26.864300  236895 main.go:143] libmachine: Using SSH client type: native
	I1119 22:31:26.864532  236895 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1119 22:31:26.864550  236895 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-178067' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-178067/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-178067' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:31:26.878139  236895 cache.go:157] /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1119 22:31:26.878165  236895 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.06046252s
	I1119 22:31:26.878179  236895 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1119 22:31:26.908053  236895 cache.go:157] /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1119 22:31:26.908078  236895 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 2.090428378s
	I1119 22:31:26.908092  236895 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1119 22:31:26.908106  236895 cache.go:87] Successfully saved all images to host disk.
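At this point every control-plane image has been written as a tarball under the run's MINIKUBE_HOME; the "Loading image from:" lines further down read these files back and scp them to /var/lib/minikube/images/ on the node. A sketch of inspecting that cache, assuming only the home path shown in the log:

    # List the image cache the log just finished populating.
    ls -lh /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/
    # Expect one tarball per image named after its tag, e.g. kube-apiserver_v1.34.1,
    # etcd_3.6.4-0, pause_3.10.1; these are what the later scp lines copy to the node.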
	I1119 22:31:26.990962  236895 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:31:26.990986  236895 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 22:31:26.991004  236895 ubuntu.go:190] setting up certificates
	I1119 22:31:26.991013  236895 provision.go:84] configureAuth start
	I1119 22:31:26.991057  236895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178067
	I1119 22:31:27.009119  236895 provision.go:143] copyHostCerts
	I1119 22:31:27.009173  236895 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem, removing ...
	I1119 22:31:27.009181  236895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem
	I1119 22:31:27.009245  236895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 22:31:27.009325  236895 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem, removing ...
	I1119 22:31:27.009333  236895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem
	I1119 22:31:27.009358  236895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 22:31:27.009415  236895 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem, removing ...
	I1119 22:31:27.009422  236895 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem
	I1119 22:31:27.009447  236895 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 22:31:27.009500  236895 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.no-preload-178067 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-178067]
	I1119 22:31:27.071959  236895 provision.go:177] copyRemoteCerts
	I1119 22:31:27.072025  236895 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:31:27.072060  236895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178067
	I1119 22:31:27.089491  236895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/no-preload-178067/id_rsa Username:docker}
	I1119 22:31:27.181284  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:31:27.200277  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:31:27.217326  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1119 22:31:27.234205  236895 provision.go:87] duration metric: took 243.182837ms to configureAuth
	I1119 22:31:27.234229  236895 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:31:27.234377  236895 config.go:182] Loaded profile config "no-preload-178067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:31:27.234464  236895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178067
	I1119 22:31:27.252712  236895 main.go:143] libmachine: Using SSH client type: native
	I1119 22:31:27.252976  236895 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33058 <nil> <nil>}
	I1119 22:31:27.253002  236895 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:31:27.507402  236895 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:31:27.507428  236895 machine.go:97] duration metric: took 978.584281ms to provisionDockerMachine
	I1119 22:31:27.507440  236895 client.go:176] duration metric: took 2.664476611s to LocalClient.Create
	I1119 22:31:27.507462  236895 start.go:167] duration metric: took 2.664534841s to libmachine.API.Create "no-preload-178067"
	I1119 22:31:27.507471  236895 start.go:293] postStartSetup for "no-preload-178067" (driver="docker")
	I1119 22:31:27.507491  236895 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:31:27.507548  236895 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:31:27.507604  236895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178067
	I1119 22:31:27.526130  236895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/no-preload-178067/id_rsa Username:docker}
	I1119 22:31:27.618935  236895 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:31:27.622243  236895 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:31:27.622266  236895 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:31:27.622276  236895 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 22:31:27.622321  236895 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 22:31:27.622387  236895 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem -> 128292.pem in /etc/ssl/certs
	I1119 22:31:27.622479  236895 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:31:27.630581  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:31:27.649038  236895 start.go:296] duration metric: took 141.552204ms for postStartSetup
	I1119 22:31:27.649348  236895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178067
	I1119 22:31:27.667087  236895 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/config.json ...
	I1119 22:31:27.667335  236895 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:31:27.667387  236895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178067
	I1119 22:31:27.683885  236895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/no-preload-178067/id_rsa Username:docker}
	I1119 22:31:27.774894  236895 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:31:27.779700  236895 start.go:128] duration metric: took 2.938316269s to createHost
	I1119 22:31:27.779724  236895 start.go:83] releasing machines lock for "no-preload-178067", held for 2.938423774s
	I1119 22:31:27.779788  236895 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178067
	I1119 22:31:27.797522  236895 ssh_runner.go:195] Run: cat /version.json
	I1119 22:31:27.797561  236895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178067
	I1119 22:31:27.797618  236895 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:31:27.797690  236895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178067
	I1119 22:31:27.815502  236895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/no-preload-178067/id_rsa Username:docker}
	I1119 22:31:27.815757  236895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/no-preload-178067/id_rsa Username:docker}
	I1119 22:31:27.903592  236895 ssh_runner.go:195] Run: systemctl --version
	I1119 22:31:27.956989  236895 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:31:27.987145  236895 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:31:27.991435  236895 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:31:27.991505  236895 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:31:28.016358  236895 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:31:28.016381  236895 start.go:496] detecting cgroup driver to use...
	I1119 22:31:28.016411  236895 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:31:28.016466  236895 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:31:28.032070  236895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:31:28.043276  236895 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:31:28.043322  236895 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:31:28.058173  236895 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:31:28.074633  236895 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:31:28.155724  236895 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:31:28.247444  236895 docker.go:234] disabling docker service ...
	I1119 22:31:28.247528  236895 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:31:28.267135  236895 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:31:28.279748  236895 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:31:28.358717  236895 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:31:28.444648  236895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:31:28.456384  236895 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:31:28.469402  236895 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:31:28.469446  236895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:31:28.478682  236895 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 22:31:28.478730  236895 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:31:28.486550  236895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:31:28.494115  236895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:31:28.501987  236895 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:31:28.509784  236895 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:31:28.517977  236895 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:31:28.530498  236895 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:31:28.538564  236895 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:31:28.545223  236895 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:31:28.552062  236895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:31:28.631898  236895 ssh_runner.go:195] Run: sudo systemctl restart crio
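The run of sed/grep commands above rewrites /etc/crio/crio.conf.d/02-crio.conf before this restart: it pins the pause image, switches the cgroup manager to systemd, forces conmon into the pod cgroup, and opens unprivileged low ports via default_sysctls. A post-restart sanity check, sketched under the assumption that the kicbase image keeps its CRI-O drop-in at that path:

    # Expect roughly these values after the edits above (standard CRI-O keys):
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = ["net.ipv4.ip_unprivileged_port_start=0"]
    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    sudo sysctl net.ipv4.ip_forward   # the log echoes 1 into ip_forward just before the restart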
	I1119 22:31:29.063631  236895 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:31:29.063700  236895 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:31:29.067572  236895 start.go:564] Will wait 60s for crictl version
	I1119 22:31:29.067625  236895 ssh_runner.go:195] Run: which crictl
	I1119 22:31:29.071110  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:31:29.094598  236895 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:31:29.094673  236895 ssh_runner.go:195] Run: crio --version
	I1119 22:31:29.120922  236895 ssh_runner.go:195] Run: crio --version
	I1119 22:31:29.150559  236895 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:31:29.151749  236895 cli_runner.go:164] Run: docker network inspect no-preload-178067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:31:29.169477  236895 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 22:31:29.173328  236895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:31:29.183455  236895 kubeadm.go:884] updating cluster {Name:no-preload-178067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-178067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:31:29.183554  236895 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:31:29.183606  236895 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:31:29.206881  236895 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1119 22:31:29.206907  236895 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1119 22:31:29.206952  236895 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:31:29.206965  236895 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:31:29.206977  236895 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:31:29.206988  236895 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:31:29.207024  236895 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:31:29.207038  236895 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1119 22:31:29.207042  236895 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:31:29.207026  236895 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:31:29.208220  236895 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:31:29.208229  236895 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:31:29.208229  236895 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:31:29.208225  236895 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:31:29.208249  236895 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:31:29.208259  236895 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1119 22:31:29.208292  236895 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:31:29.208399  236895 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:31:29.347753  236895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:31:29.353465  236895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:31:29.369195  236895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1119 22:31:29.373526  236895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:31:29.373778  236895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:31:29.384682  236895 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1119 22:31:29.384752  236895 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:31:29.384865  236895 ssh_runner.go:195] Run: which crictl
	I1119 22:31:29.385672  236895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1119 22:31:29.391979  236895 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1119 22:31:29.392022  236895 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:31:29.392062  236895 ssh_runner.go:195] Run: which crictl
	I1119 22:31:29.413379  236895 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1119 22:31:29.413427  236895 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1119 22:31:29.413470  236895 ssh_runner.go:195] Run: which crictl
	I1119 22:31:29.414006  236895 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1119 22:31:29.414038  236895 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:31:29.414074  236895 ssh_runner.go:195] Run: which crictl
	I1119 22:31:29.415854  236895 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1119 22:31:29.415889  236895 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:31:29.415898  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:31:29.415929  236895 ssh_runner.go:195] Run: which crictl
	I1119 22:31:29.416921  236895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:31:29.425300  236895 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1119 22:31:29.425336  236895 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1119 22:31:29.425369  236895 ssh_runner.go:195] Run: which crictl
	I1119 22:31:29.425404  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:31:29.425471  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:31:29.425490  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:31:29.445939  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:31:29.446002  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:31:29.462145  236895 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1119 22:31:29.462188  236895 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:31:29.462220  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:31:29.462230  236895 ssh_runner.go:195] Run: which crictl
	I1119 22:31:29.462250  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:31:29.462330  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:31:29.462342  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:31:29.481136  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:31:29.481196  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:31:29.481228  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1119 22:31:29.494545  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1119 22:31:29.498034  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1119 22:31:29.500912  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:31:29.500977  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1119 22:31:29.517553  236895 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1119 22:31:29.517666  236895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:31:29.517834  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:31:29.517934  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1119 22:31:29.526722  236895 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1119 22:31:29.526842  236895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:31:29.530904  236895 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1119 22:31:29.530997  236895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:31:29.535201  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1119 22:31:29.536951  236895 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1119 22:31:29.536981  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1119 22:31:29.537027  236895 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1119 22:31:29.537109  236895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1119 22:31:29.563122  236895 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1119 22:31:29.563150  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1119 22:31:29.563159  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1119 22:31:29.563176  236895 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1119 22:31:29.563149  236895 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1119 22:31:29.563199  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1119 22:31:29.563288  236895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:31:29.577803  236895 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1119 22:31:29.577843  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1119 22:31:29.577871  236895 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1119 22:31:29.577954  236895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:31:25.630115  230837 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:31:25.630197  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:25.630214  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-680619 minikube.k8s.io/updated_at=2025_11_19T22_31_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=old-k8s-version-680619 minikube.k8s.io/primary=true
	I1119 22:31:25.639778  230837 ops.go:34] apiserver oom_adj: -16
	I1119 22:31:25.706228  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:26.207121  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:26.706517  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:27.206990  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:27.706221  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:28.207001  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:28.706618  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:29.206999  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:29.706367  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:30.207120  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:27.214884  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:31:27.214940  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:29.636204  236895 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1119 22:31:29.636248  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1119 22:31:29.636252  236895 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1119 22:31:29.636274  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1119 22:31:29.636381  236895 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1119 22:31:29.636421  236895 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:31:29.636464  236895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:31:29.650907  236895 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1119 22:31:29.650963  236895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1119 22:31:29.720987  236895 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1119 22:31:29.720993  236895 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1119 22:31:29.721036  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1119 22:31:29.721047  236895 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:31:29.721114  236895 ssh_runner.go:195] Run: which crictl
	I1119 22:31:29.863884  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:31:29.864272  236895 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1119 22:31:29.905198  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:31:29.924509  236895 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:31:29.924591  236895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1119 22:31:29.940779  236895 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:31:31.027177  236895 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.102559879s)
	I1119 22:31:31.027201  236895 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1119 22:31:31.027216  236895 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:31:31.027249  236895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1119 22:31:31.027247  236895 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.086434818s)
	I1119 22:31:31.027293  236895 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1119 22:31:31.027381  236895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:31:32.157495  236895 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.13022141s)
	I1119 22:31:32.157527  236895 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1119 22:31:32.157552  236895 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.130149672s)
	I1119 22:31:32.157557  236895 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:31:32.157573  236895 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1119 22:31:32.157595  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1119 22:31:32.157617  236895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1119 22:31:33.438459  236895 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.28081714s)
	I1119 22:31:33.438482  236895 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1119 22:31:33.438515  236895 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:31:33.438566  236895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1119 22:31:30.706278  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:31.206977  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:31.706583  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:32.206641  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:32.706547  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:33.207016  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:33.706894  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:34.207284  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:34.706327  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:35.206926  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:32.167610  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": read tcp 192.168.94.1:52328->192.168.94.2:8443: read: connection reset by peer
	I1119 22:31:32.167654  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:32.168229  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:31:32.212505  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:32.212997  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:31:32.712651  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
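The api_server.go lines from process 229026 above poll the apiserver's /healthz endpoint and keep retrying through timeouts, connection resets, and connection refusals. A minimal Go sketch of such a polling loop, assuming a local HTTP client that skips TLS verification (an illustration only, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz keeps hitting the healthz URL until it answers 200 OK or
// the overall deadline passes, mirroring the retry pattern in the log.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second, // per-request timeout, an assumed value
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Covers the "connection refused" and timeout retries seen above.
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, deadline)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}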
	I1119 22:31:35.706640  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:36.206338  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:36.706332  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:37.206682  230837 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:37.274578  230837 kubeadm.go:1114] duration metric: took 11.644433004s to wait for elevateKubeSystemPrivileges
	I1119 22:31:37.274616  230837 kubeadm.go:403] duration metric: took 21.642986967s to StartCluster
	I1119 22:31:37.274636  230837 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:31:37.274708  230837 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:31:37.276311  230837 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:31:37.276536  230837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:31:37.276549  230837 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:31:37.276628  230837 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:31:37.276721  230837 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-680619"
	I1119 22:31:37.276730  230837 config.go:182] Loaded profile config "old-k8s-version-680619": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 22:31:37.276741  230837 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-680619"
	I1119 22:31:37.276752  230837 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-680619"
	I1119 22:31:37.276774  230837 host.go:66] Checking if "old-k8s-version-680619" exists ...
	I1119 22:31:37.276788  230837 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-680619"
	I1119 22:31:37.277190  230837 cli_runner.go:164] Run: docker container inspect old-k8s-version-680619 --format={{.State.Status}}
	I1119 22:31:37.277361  230837 cli_runner.go:164] Run: docker container inspect old-k8s-version-680619 --format={{.State.Status}}
	I1119 22:31:37.278637  230837 out.go:179] * Verifying Kubernetes components...
	I1119 22:31:37.279698  230837 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:31:37.299929  230837 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-680619"
	I1119 22:31:37.299973  230837 host.go:66] Checking if "old-k8s-version-680619" exists ...
	I1119 22:31:37.300482  230837 cli_runner.go:164] Run: docker container inspect old-k8s-version-680619 --format={{.State.Status}}
	I1119 22:31:37.301661  230837 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:31:37.302746  230837 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:31:37.302767  230837 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:31:37.302838  230837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-680619
	I1119 22:31:37.332947  230837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/old-k8s-version-680619/id_rsa Username:docker}
	I1119 22:31:37.334919  230837 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:31:37.334939  230837 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:31:37.334992  230837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-680619
	I1119 22:31:37.361707  230837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/old-k8s-version-680619/id_rsa Username:docker}
	I1119 22:31:37.384766  230837 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:31:37.423551  230837 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:31:37.452055  230837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:31:37.469450  230837 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:31:37.611706  230837 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1119 22:31:37.613137  230837 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-680619" to be "Ready" ...
	I1119 22:31:37.868437  230837 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
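The coredns ConfigMap edit logged above rewrites the Corefile so that host.minikube.internal resolves to the gateway IP, inserting a hosts block ahead of the forward plugin. A rough Go sketch of just that text transformation (reading and replacing the ConfigMap through the API is omitted; this is an illustrative simplification, not minikube's implementation):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord places a hosts{} stanza immediately before the forward
// directive, matching the sed address used in the pipeline above.
func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward .") &&
			!strings.Contains(out.String(), "host.minikube.internal") {
			out.WriteString(block)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
}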
	I1119 22:31:34.993676  236895 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.555085204s)
	I1119 22:31:34.993711  236895 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1119 22:31:34.993737  236895 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:31:34.993779  236895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1119 22:31:36.326442  236895 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.332634238s)
	I1119 22:31:36.326483  236895 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1119 22:31:36.326512  236895 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:31:36.326558  236895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1119 22:31:37.869783  230837 addons.go:515] duration metric: took 593.139553ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:31:38.117272  230837 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-680619" context rescaled to 1 replicas
	W1119 22:31:39.616939  230837 node_ready.go:57] node "old-k8s-version-680619" has "Ready":"False" status (will retry)
	I1119 22:31:37.712898  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:31:37.712960  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:40.096091  236895 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.769511749s)
	I1119 22:31:40.096116  236895 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1119 22:31:40.096148  236895 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:31:40.096200  236895 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1119 22:31:40.653918  236895 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21918-9335/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1119 22:31:40.653959  236895 cache_images.go:125] Successfully loaded all cached images
	I1119 22:31:40.653968  236895 cache_images.go:94] duration metric: took 11.447048888s to LoadCachedImages
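The LoadCachedImages sequence above checks each image tarball on the node with stat, copies missing ones over from the local cache, and loads them with sudo podman load -i. A minimal Go sketch of the check-then-load step, assuming local exec in place of minikube's ssh_runner (paths copied from the log; the scp step is omitted):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage mirrors the existence check and load for one tarball.
func loadCachedImage(tarball string) error {
	// Mirrors the `stat -c "%s %y" <path>` existence check in the log; in
	// minikube a missing file triggers the scp from the local cache, which
	// this sketch does not perform.
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("existence check for %s: %w", tarball, err)
	}
	// Mirrors `sudo podman load -i <path>` (the crio.go "Loading image" lines).
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load -i %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/pause_3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}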
	I1119 22:31:40.653982  236895 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1119 22:31:40.654075  236895 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-178067 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-178067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:31:40.654141  236895 ssh_runner.go:195] Run: crio config
	I1119 22:31:40.698620  236895 cni.go:84] Creating CNI manager for ""
	I1119 22:31:40.698640  236895 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:31:40.698654  236895 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:31:40.698678  236895 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-178067 NodeName:no-preload-178067 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:31:40.698810  236895 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-178067"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:31:40.698898  236895 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:31:40.706855  236895 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1119 22:31:40.706913  236895 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1119 22:31:40.714699  236895 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1119 22:31:40.714754  236895 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21918-9335/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1119 22:31:40.714777  236895 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21918-9335/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1119 22:31:40.714792  236895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1119 22:31:40.718459  236895 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1119 22:31:40.718484  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1119 22:31:41.576861  236895 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:31:41.590467  236895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1119 22:31:41.594505  236895 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1119 22:31:41.594530  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1119 22:31:41.686208  236895 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1119 22:31:41.693249  236895 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1119 22:31:41.693283  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1119 22:31:41.903881  236895 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:31:41.911838  236895 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 22:31:41.924057  236895 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:31:41.939118  236895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1119 22:31:41.950776  236895 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:31:41.954251  236895 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
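The one-liner above idempotently pins control-plane.minikube.internal in /etc/hosts by dropping any previous entry for the host and appending the node IP. An equivalent Go sketch, assuming direct file access instead of running the shell command over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostRecord removes any existing line ending in "<tab>host" and appends
// "ip<tab>host", mirroring the grep -v / echo / cp pipeline in the log.
func pinHostRecord(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHostRecord("/etc/hosts", "192.168.103.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}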
	I1119 22:31:41.963584  236895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:31:42.043084  236895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:31:42.062714  236895 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067 for IP: 192.168.103.2
	I1119 22:31:42.062736  236895 certs.go:195] generating shared ca certs ...
	I1119 22:31:42.062756  236895 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:31:42.062924  236895 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 22:31:42.062976  236895 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 22:31:42.062988  236895 certs.go:257] generating profile certs ...
	I1119 22:31:42.063052  236895 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/client.key
	I1119 22:31:42.063068  236895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/client.crt with IP's: []
	I1119 22:31:42.206714  236895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/client.crt ...
	I1119 22:31:42.206736  236895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/client.crt: {Name:mkbf8185214b3afe154937a25705f170fe9d3a63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:31:42.206882  236895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/client.key ...
	I1119 22:31:42.206893  236895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/client.key: {Name:mk3c3e164572a1e0cdc4815b8e3c0371b8198c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:31:42.206963  236895 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/apiserver.key.4c430d4d
	I1119 22:31:42.206977  236895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/apiserver.crt.4c430d4d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1119 22:31:42.426438  236895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/apiserver.crt.4c430d4d ...
	I1119 22:31:42.426461  236895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/apiserver.crt.4c430d4d: {Name:mkca2ebde2e8b3915fe7e821291c45200b0cae87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:31:42.426605  236895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/apiserver.key.4c430d4d ...
	I1119 22:31:42.426620  236895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/apiserver.key.4c430d4d: {Name:mked2f455eb75e64a5e7887e8a7382ba52329cb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:31:42.426689  236895 certs.go:382] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/apiserver.crt.4c430d4d -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/apiserver.crt
	I1119 22:31:42.426759  236895 certs.go:386] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/apiserver.key.4c430d4d -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/apiserver.key
	I1119 22:31:42.426811  236895 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/proxy-client.key
	I1119 22:31:42.426837  236895 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/proxy-client.crt with IP's: []
	I1119 22:31:42.492521  236895 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/proxy-client.crt ...
	I1119 22:31:42.492542  236895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/proxy-client.crt: {Name:mk2ea76e08e39dae4bd78f713513fd75fa30064f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:31:42.492666  236895 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/proxy-client.key ...
	I1119 22:31:42.492678  236895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/proxy-client.key: {Name:mkeb96be0929d2e490c14415d47f10dc632b6452 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
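The certs.go steps above generate the profile certificates (client, apiserver, proxy-client), each signed by an existing minikube CA. A compact, illustrative Go sketch of issuing one CA-signed client certificate with crypto/x509; key sizes, subjects, and validity are assumptions, and this is not minikube's crypto.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Create a CA key and certificate (minikube reuses an existing CA here).
	// Error handling is elided for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Issue a client certificate signed by the CA, analogous to the
	// "minikube-user" client.crt / client.key pair in the log.
	clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	clientTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	clientDER, _ := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)

	// Write the resulting cert and key as PEM to stdout.
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: clientDER})
	pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(clientKey)})
}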
	I1119 22:31:42.492857  236895 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem (1338 bytes)
	W1119 22:31:42.492889  236895 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829_empty.pem, impossibly tiny 0 bytes
	I1119 22:31:42.492898  236895 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:31:42.492918  236895 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:31:42.492939  236895 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:31:42.492962  236895 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 22:31:42.493004  236895 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:31:42.493476  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:31:42.512105  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:31:42.528686  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:31:42.544954  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:31:42.561137  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:31:42.577032  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:31:42.593199  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:31:42.609459  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:31:42.626466  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:31:42.644564  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem --> /usr/share/ca-certificates/12829.pem (1338 bytes)
	I1119 22:31:42.660959  236895 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /usr/share/ca-certificates/128292.pem (1708 bytes)
	I1119 22:31:42.677268  236895 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:31:42.689148  236895 ssh_runner.go:195] Run: openssl version
	I1119 22:31:42.694970  236895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:31:42.703023  236895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:31:42.706594  236895 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:31:42.706633  236895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:31:42.741548  236895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:31:42.749714  236895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12829.pem && ln -fs /usr/share/ca-certificates/12829.pem /etc/ssl/certs/12829.pem"
	I1119 22:31:42.758646  236895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12829.pem
	I1119 22:31:42.762458  236895 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12829.pem
	I1119 22:31:42.762509  236895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12829.pem
	I1119 22:31:42.799600  236895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12829.pem /etc/ssl/certs/51391683.0"
	I1119 22:31:42.809403  236895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128292.pem && ln -fs /usr/share/ca-certificates/128292.pem /etc/ssl/certs/128292.pem"
	I1119 22:31:42.819056  236895 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128292.pem
	I1119 22:31:42.822894  236895 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128292.pem
	I1119 22:31:42.822938  236895 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128292.pem
	I1119 22:31:42.857242  236895 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128292.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:31:42.865754  236895 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:31:42.869181  236895 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:31:42.869228  236895 kubeadm.go:401] StartCluster: {Name:no-preload-178067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-178067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:31:42.869308  236895 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:31:42.869339  236895 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:31:42.895418  236895 cri.go:89] found id: ""
	I1119 22:31:42.895459  236895 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:31:42.902925  236895 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:31:42.910595  236895 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:31:42.910637  236895 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:31:42.918019  236895 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:31:42.918034  236895 kubeadm.go:158] found existing configuration files:
	
	I1119 22:31:42.918071  236895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:31:42.926061  236895 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:31:42.926105  236895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:31:42.932808  236895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:31:42.939999  236895 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:31:42.940043  236895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:31:42.946903  236895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:31:42.953954  236895 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:31:42.953996  236895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:31:42.960856  236895 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:31:42.967886  236895 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:31:42.967920  236895 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:31:42.974679  236895 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:31:43.028744  236895 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:31:43.082567  236895 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
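The Start line above runs kubeadm init with the versioned binaries directory prepended to PATH and a list of preflight checks to ignore. A short Go sketch of launching that command, assuming local exec rather than minikube's ssh_runner and an abridged ignore list (the full list appears in the log line above):

package main

import (
	"context"
	"os"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	// Mirrors the `env PATH=... kubeadm init --config ...` invocation above,
	// with the preflight ignore list shortened for this example.
	cmd := exec.CommandContext(ctx, "sudo", "/bin/bash", "-c",
		`env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init `+
			`--config /var/tmp/minikube/kubeadm.yaml `+
			`--ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem`)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}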
	W1119 22:31:42.116226  230837 node_ready.go:57] node "old-k8s-version-680619" has "Ready":"False" status (will retry)
	W1119 22:31:44.616128  230837 node_ready.go:57] node "old-k8s-version-680619" has "Ready":"False" status (will retry)
	I1119 22:31:42.714032  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:31:42.714080  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	W1119 22:31:47.115895  230837 node_ready.go:57] node "old-k8s-version-680619" has "Ready":"False" status (will retry)
	W1119 22:31:49.115967  230837 node_ready.go:57] node "old-k8s-version-680619" has "Ready":"False" status (will retry)
	I1119 22:31:51.423140  236895 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:31:51.423203  236895 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:31:51.423314  236895 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:31:51.423387  236895 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:31:51.423440  236895 kubeadm.go:319] OS: Linux
	I1119 22:31:51.423486  236895 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:31:51.423540  236895 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:31:51.423608  236895 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:31:51.423674  236895 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:31:51.423742  236895 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:31:51.423797  236895 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:31:51.423863  236895 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:31:51.423903  236895 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:31:51.423965  236895 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:31:51.424048  236895 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:31:51.424128  236895 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:31:51.424191  236895 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:31:51.425350  236895 out.go:252]   - Generating certificates and keys ...
	I1119 22:31:51.425417  236895 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:31:51.425502  236895 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:31:51.425587  236895 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:31:51.425656  236895 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:31:51.425743  236895 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:31:51.425865  236895 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:31:51.425943  236895 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:31:51.426130  236895 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-178067] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:31:51.426216  236895 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:31:51.426368  236895 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-178067] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:31:51.426472  236895 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:31:51.426533  236895 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:31:51.426573  236895 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:31:51.426621  236895 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:31:51.426665  236895 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:31:51.426732  236895 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:31:51.426796  236895 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:31:51.426912  236895 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:31:51.426991  236895 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:31:51.427109  236895 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:31:51.427214  236895 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:31:51.429151  236895 out.go:252]   - Booting up control plane ...
	I1119 22:31:51.429252  236895 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:31:51.429340  236895 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:31:51.429449  236895 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:31:51.429613  236895 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:31:51.429695  236895 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:31:51.429811  236895 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:31:51.429950  236895 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:31:51.429993  236895 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:31:51.430144  236895 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:31:51.430290  236895 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:31:51.430378  236895 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.587804ms
	I1119 22:31:51.430511  236895 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:31:51.430633  236895 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1119 22:31:51.430764  236895 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:31:51.430899  236895 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:31:51.431012  236895 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005302295s
	I1119 22:31:51.431100  236895 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.899021664s
	I1119 22:31:51.431163  236895 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501742664s
	I1119 22:31:51.431257  236895 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:31:51.431390  236895 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:31:51.431454  236895 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:31:51.431743  236895 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-178067 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:31:51.431848  236895 kubeadm.go:319] [bootstrap-token] Using token: ndeh72.roy3z8e79je08se6
	I1119 22:31:51.432989  236895 out.go:252]   - Configuring RBAC rules ...
	I1119 22:31:51.433125  236895 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:31:51.433259  236895 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:31:51.433463  236895 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:31:51.433588  236895 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:31:51.433754  236895 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:31:51.433881  236895 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:31:51.434024  236895 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:31:51.434092  236895 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:31:51.434153  236895 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:31:51.434174  236895 kubeadm.go:319] 
	I1119 22:31:51.434263  236895 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:31:51.434274  236895 kubeadm.go:319] 
	I1119 22:31:51.434352  236895 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:31:51.434361  236895 kubeadm.go:319] 
	I1119 22:31:51.434388  236895 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:31:51.434450  236895 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:31:51.434536  236895 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:31:51.434546  236895 kubeadm.go:319] 
	I1119 22:31:51.434625  236895 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:31:51.434635  236895 kubeadm.go:319] 
	I1119 22:31:51.434713  236895 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:31:51.434727  236895 kubeadm.go:319] 
	I1119 22:31:51.434808  236895 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:31:51.434933  236895 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:31:51.435056  236895 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:31:51.435072  236895 kubeadm.go:319] 
	I1119 22:31:51.435171  236895 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:31:51.435324  236895 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:31:51.435338  236895 kubeadm.go:319] 
	I1119 22:31:51.435447  236895 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ndeh72.roy3z8e79je08se6 \
	I1119 22:31:51.435624  236895 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b \
	I1119 22:31:51.435664  236895 kubeadm.go:319] 	--control-plane 
	I1119 22:31:51.435670  236895 kubeadm.go:319] 
	I1119 22:31:51.435790  236895 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:31:51.435808  236895 kubeadm.go:319] 
	I1119 22:31:51.435972  236895 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ndeh72.roy3z8e79je08se6 \
	I1119 22:31:51.436140  236895 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b 
	I1119 22:31:51.436154  236895 cni.go:84] Creating CNI manager for ""
	I1119 22:31:51.436163  236895 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:31:51.437892  236895 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:31:47.714777  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:31:47.714848  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:51.245132  230837 node_ready.go:49] node "old-k8s-version-680619" is "Ready"
	I1119 22:31:51.245168  230837 node_ready.go:38] duration metric: took 13.632000816s for node "old-k8s-version-680619" to be "Ready" ...
	I1119 22:31:51.245185  230837 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:31:51.245243  230837 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:31:51.261338  230837 api_server.go:72] duration metric: took 13.984755605s to wait for apiserver process to appear ...
	I1119 22:31:51.261368  230837 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:31:51.261390  230837 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1119 22:31:51.267420  230837 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1119 22:31:51.268906  230837 api_server.go:141] control plane version: v1.28.0
	I1119 22:31:51.268936  230837 api_server.go:131] duration metric: took 7.559984ms to wait for apiserver health ...
	I1119 22:31:51.268947  230837 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:31:51.274087  230837 system_pods.go:59] 8 kube-system pods found
	I1119 22:31:51.274135  230837 system_pods.go:61] "coredns-5dd5756b68-7bkvq" [24ad2a97-5b78-4886-85c8-52a4b867d20f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:31:51.274150  230837 system_pods.go:61] "etcd-old-k8s-version-680619" [7972b29a-4239-494f-aee7-90c207aca5da] Running
	I1119 22:31:51.274159  230837 system_pods.go:61] "kindnet-mf7gh" [550a5209-7fe8-450e-9e4b-00c04bf8c100] Running
	I1119 22:31:51.274172  230837 system_pods.go:61] "kube-apiserver-old-k8s-version-680619" [a9887614-0d54-4d5f-8c0a-d83bf13d7bc2] Running
	I1119 22:31:51.274246  230837 system_pods.go:61] "kube-controller-manager-old-k8s-version-680619" [420813c6-38d8-4258-b84f-591800d4e28b] Running
	I1119 22:31:51.274267  230837 system_pods.go:61] "kube-proxy-4xxp4" [74c1052a-f35c-4184-9c26-f8aaebd30451] Running
	I1119 22:31:51.274285  230837 system_pods.go:61] "kube-scheduler-old-k8s-version-680619" [55027018-f305-4926-9366-3e6947d5c40a] Running
	I1119 22:31:51.274298  230837 system_pods.go:61] "storage-provisioner" [43b02c7c-446f-4009-9bfe-f47cf1054208] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:31:51.274308  230837 system_pods.go:74] duration metric: took 5.352177ms to wait for pod list to return data ...
	I1119 22:31:51.274756  230837 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:31:51.276945  230837 default_sa.go:45] found service account: "default"
	I1119 22:31:51.276965  230837 default_sa.go:55] duration metric: took 2.192657ms for default service account to be created ...
	I1119 22:31:51.276972  230837 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:31:51.279714  230837 system_pods.go:86] 8 kube-system pods found
	I1119 22:31:51.279736  230837 system_pods.go:89] "coredns-5dd5756b68-7bkvq" [24ad2a97-5b78-4886-85c8-52a4b867d20f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:31:51.279741  230837 system_pods.go:89] "etcd-old-k8s-version-680619" [7972b29a-4239-494f-aee7-90c207aca5da] Running
	I1119 22:31:51.279747  230837 system_pods.go:89] "kindnet-mf7gh" [550a5209-7fe8-450e-9e4b-00c04bf8c100] Running
	I1119 22:31:51.279752  230837 system_pods.go:89] "kube-apiserver-old-k8s-version-680619" [a9887614-0d54-4d5f-8c0a-d83bf13d7bc2] Running
	I1119 22:31:51.279756  230837 system_pods.go:89] "kube-controller-manager-old-k8s-version-680619" [420813c6-38d8-4258-b84f-591800d4e28b] Running
	I1119 22:31:51.279759  230837 system_pods.go:89] "kube-proxy-4xxp4" [74c1052a-f35c-4184-9c26-f8aaebd30451] Running
	I1119 22:31:51.279763  230837 system_pods.go:89] "kube-scheduler-old-k8s-version-680619" [55027018-f305-4926-9366-3e6947d5c40a] Running
	I1119 22:31:51.279770  230837 system_pods.go:89] "storage-provisioner" [43b02c7c-446f-4009-9bfe-f47cf1054208] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:31:51.279788  230837 retry.go:31] will retry after 311.328023ms: missing components: kube-dns
	I1119 22:31:51.594940  230837 system_pods.go:86] 8 kube-system pods found
	I1119 22:31:51.594972  230837 system_pods.go:89] "coredns-5dd5756b68-7bkvq" [24ad2a97-5b78-4886-85c8-52a4b867d20f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:31:51.594979  230837 system_pods.go:89] "etcd-old-k8s-version-680619" [7972b29a-4239-494f-aee7-90c207aca5da] Running
	I1119 22:31:51.594986  230837 system_pods.go:89] "kindnet-mf7gh" [550a5209-7fe8-450e-9e4b-00c04bf8c100] Running
	I1119 22:31:51.594989  230837 system_pods.go:89] "kube-apiserver-old-k8s-version-680619" [a9887614-0d54-4d5f-8c0a-d83bf13d7bc2] Running
	I1119 22:31:51.594994  230837 system_pods.go:89] "kube-controller-manager-old-k8s-version-680619" [420813c6-38d8-4258-b84f-591800d4e28b] Running
	I1119 22:31:51.594998  230837 system_pods.go:89] "kube-proxy-4xxp4" [74c1052a-f35c-4184-9c26-f8aaebd30451] Running
	I1119 22:31:51.595003  230837 system_pods.go:89] "kube-scheduler-old-k8s-version-680619" [55027018-f305-4926-9366-3e6947d5c40a] Running
	I1119 22:31:51.595010  230837 system_pods.go:89] "storage-provisioner" [43b02c7c-446f-4009-9bfe-f47cf1054208] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:31:51.595028  230837 retry.go:31] will retry after 313.796901ms: missing components: kube-dns
	I1119 22:31:51.912692  230837 system_pods.go:86] 8 kube-system pods found
	I1119 22:31:51.912716  230837 system_pods.go:89] "coredns-5dd5756b68-7bkvq" [24ad2a97-5b78-4886-85c8-52a4b867d20f] Running
	I1119 22:31:51.912721  230837 system_pods.go:89] "etcd-old-k8s-version-680619" [7972b29a-4239-494f-aee7-90c207aca5da] Running
	I1119 22:31:51.912733  230837 system_pods.go:89] "kindnet-mf7gh" [550a5209-7fe8-450e-9e4b-00c04bf8c100] Running
	I1119 22:31:51.912736  230837 system_pods.go:89] "kube-apiserver-old-k8s-version-680619" [a9887614-0d54-4d5f-8c0a-d83bf13d7bc2] Running
	I1119 22:31:51.912741  230837 system_pods.go:89] "kube-controller-manager-old-k8s-version-680619" [420813c6-38d8-4258-b84f-591800d4e28b] Running
	I1119 22:31:51.912744  230837 system_pods.go:89] "kube-proxy-4xxp4" [74c1052a-f35c-4184-9c26-f8aaebd30451] Running
	I1119 22:31:51.912747  230837 system_pods.go:89] "kube-scheduler-old-k8s-version-680619" [55027018-f305-4926-9366-3e6947d5c40a] Running
	I1119 22:31:51.912750  230837 system_pods.go:89] "storage-provisioner" [43b02c7c-446f-4009-9bfe-f47cf1054208] Running
	I1119 22:31:51.912758  230837 system_pods.go:126] duration metric: took 635.781046ms to wait for k8s-apps to be running ...
	I1119 22:31:51.912768  230837 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:31:51.912845  230837 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:31:51.926075  230837 system_svc.go:56] duration metric: took 13.299436ms WaitForService to wait for kubelet
	I1119 22:31:51.926098  230837 kubeadm.go:587] duration metric: took 14.649523142s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:31:51.926120  230837 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:31:51.928162  230837 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:31:51.928186  230837 node_conditions.go:123] node cpu capacity is 8
	I1119 22:31:51.928202  230837 node_conditions.go:105] duration metric: took 2.076678ms to run NodePressure ...
	I1119 22:31:51.928217  230837 start.go:242] waiting for startup goroutines ...
	I1119 22:31:51.928230  230837 start.go:247] waiting for cluster config update ...
	I1119 22:31:51.928245  230837 start.go:256] writing updated cluster config ...
	I1119 22:31:51.928495  230837 ssh_runner.go:195] Run: rm -f paused
	I1119 22:31:51.932606  230837 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:31:51.935973  230837 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-7bkvq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:31:51.939794  230837 pod_ready.go:94] pod "coredns-5dd5756b68-7bkvq" is "Ready"
	I1119 22:31:51.939840  230837 pod_ready.go:86] duration metric: took 3.840886ms for pod "coredns-5dd5756b68-7bkvq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:31:51.942371  230837 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:31:51.946084  230837 pod_ready.go:94] pod "etcd-old-k8s-version-680619" is "Ready"
	I1119 22:31:51.946102  230837 pod_ready.go:86] duration metric: took 3.712588ms for pod "etcd-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:31:51.948495  230837 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:31:51.952290  230837 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-680619" is "Ready"
	I1119 22:31:51.952310  230837 pod_ready.go:86] duration metric: took 3.794572ms for pod "kube-apiserver-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:31:51.954503  230837 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:31:52.336334  230837 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-680619" is "Ready"
	I1119 22:31:52.336364  230837 pod_ready.go:86] duration metric: took 381.843929ms for pod "kube-controller-manager-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:31:52.536647  230837 pod_ready.go:83] waiting for pod "kube-proxy-4xxp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:31:52.935851  230837 pod_ready.go:94] pod "kube-proxy-4xxp4" is "Ready"
	I1119 22:31:52.935871  230837 pod_ready.go:86] duration metric: took 399.201242ms for pod "kube-proxy-4xxp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:31:53.137372  230837 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:31:53.536246  230837 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-680619" is "Ready"
	I1119 22:31:53.536270  230837 pod_ready.go:86] duration metric: took 398.876199ms for pod "kube-scheduler-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:31:53.536280  230837 pod_ready.go:40] duration metric: took 1.603646601s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:31:53.579079  230837 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 22:31:53.581558  230837 out.go:203] 
	W1119 22:31:53.582754  230837 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 22:31:53.583954  230837 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 22:31:53.585254  230837 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-680619" cluster and "default" namespace by default
	I1119 22:31:51.439006  236895 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:31:51.444014  236895 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:31:51.444030  236895 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:31:51.457296  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:31:51.657215  236895 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:31:51.657293  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:51.657317  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-178067 minikube.k8s.io/updated_at=2025_11_19T22_31_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=no-preload-178067 minikube.k8s.io/primary=true
	I1119 22:31:51.738407  236895 ops.go:34] apiserver oom_adj: -16
	I1119 22:31:51.738548  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:52.239145  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:52.738739  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:53.238806  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:53.738679  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:54.239247  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:54.738689  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:55.238705  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:55.738756  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:56.238890  236895 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:31:56.303436  236895 kubeadm.go:1114] duration metric: took 4.646197891s to wait for elevateKubeSystemPrivileges
	I1119 22:31:56.303480  236895 kubeadm.go:403] duration metric: took 13.434251843s to StartCluster
	I1119 22:31:56.303498  236895 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:31:56.303568  236895 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:31:56.305216  236895 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:31:56.305439  236895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:31:56.305452  236895 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:31:56.305538  236895 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:31:56.305647  236895 addons.go:70] Setting storage-provisioner=true in profile "no-preload-178067"
	I1119 22:31:56.305657  236895 config.go:182] Loaded profile config "no-preload-178067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:31:56.305668  236895 addons.go:239] Setting addon storage-provisioner=true in "no-preload-178067"
	I1119 22:31:56.305667  236895 addons.go:70] Setting default-storageclass=true in profile "no-preload-178067"
	I1119 22:31:56.305692  236895 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-178067"
	I1119 22:31:56.305700  236895 host.go:66] Checking if "no-preload-178067" exists ...
	I1119 22:31:56.306089  236895 cli_runner.go:164] Run: docker container inspect no-preload-178067 --format={{.State.Status}}
	I1119 22:31:56.306242  236895 cli_runner.go:164] Run: docker container inspect no-preload-178067 --format={{.State.Status}}
	I1119 22:31:56.307663  236895 out.go:179] * Verifying Kubernetes components...
	I1119 22:31:56.309018  236895 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:31:56.328668  236895 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:31:52.716143  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:31:52.716184  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:53.073717  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": read tcp 192.168.94.1:46494->192.168.94.2:8443: read: connection reset by peer
	I1119 22:31:53.213067  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:53.213413  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:31:53.713186  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:53.713609  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:31:54.212242  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:54.212656  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:31:54.712969  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:54.713344  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:31:55.212957  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:55.213338  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:31:55.713034  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:55.713390  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:31:56.212961  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:31:56.213343  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:31:56.329614  236895 addons.go:239] Setting addon default-storageclass=true in "no-preload-178067"
	I1119 22:31:56.329659  236895 host.go:66] Checking if "no-preload-178067" exists ...
	I1119 22:31:56.329833  236895 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:31:56.329850  236895 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:31:56.329900  236895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178067
	I1119 22:31:56.330134  236895 cli_runner.go:164] Run: docker container inspect no-preload-178067 --format={{.State.Status}}
	I1119 22:31:56.359452  236895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/no-preload-178067/id_rsa Username:docker}
	I1119 22:31:56.359535  236895 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:31:56.359551  236895 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:31:56.359606  236895 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178067
	I1119 22:31:56.380878  236895 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/no-preload-178067/id_rsa Username:docker}
	I1119 22:31:56.398067  236895 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:31:56.457353  236895 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:31:56.481766  236895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:31:56.490211  236895 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:31:56.575937  236895 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1119 22:31:56.577346  236895 node_ready.go:35] waiting up to 6m0s for node "no-preload-178067" to be "Ready" ...
	I1119 22:31:56.767273  236895 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:31:56.768380  236895 addons.go:515] duration metric: took 462.840817ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:31:57.080380  236895 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-178067" context rescaled to 1 replicas
	W1119 22:31:58.580415  236895 node_ready.go:57] node "no-preload-178067" has "Ready":"False" status (will retry)
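The kubeadm init transcript above ends with join commands that embed a --discovery-token-ca-cert-hash. A minimal sketch of re-deriving that hash from the cluster CA, assuming the certificate directory /var/lib/minikube/certs reported during init and a shell inside the node (for example via minikube ssh -p no-preload-178067):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # expected to match the sha256:2439dd70... value in the join command above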
	
	
	==> CRI-O <==
	Nov 19 22:31:51 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:51.296054808Z" level=info msg="Starting container: 848f5dc105b7508a64bb319e80dd850448921629ed3c48931587914d3823819d" id=e7ea9c86-2c5b-4cf3-a5c7-82177b4b5107 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:31:51 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:51.297892874Z" level=info msg="Started container" PID=2154 containerID=848f5dc105b7508a64bb319e80dd850448921629ed3c48931587914d3823819d description=kube-system/coredns-5dd5756b68-7bkvq/coredns id=e7ea9c86-2c5b-4cf3-a5c7-82177b4b5107 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0da421152f2ea2ea6bdfd7cc6be11f6af3f60e6dd8aaf83131048e57ab86f6f0
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.026435283Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9d95fea4-b56a-48b0-a0f0-f792db369b6d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.026505011Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.031512503Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6d09419053329608f2b232e2e06c95c95483a31de055275089e9216ee6e50a55 UID:e61d10ef-eb12-4b20-83e7-48341a04a48a NetNS:/var/run/netns/01633372-5757-4a9e-9e81-e53458ab32d3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009bea08}] Aliases:map[]}"
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.031538326Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.046334526Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6d09419053329608f2b232e2e06c95c95483a31de055275089e9216ee6e50a55 UID:e61d10ef-eb12-4b20-83e7-48341a04a48a NetNS:/var/run/netns/01633372-5757-4a9e-9e81-e53458ab32d3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009bea08}] Aliases:map[]}"
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.046474893Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.047281825Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.048479086Z" level=info msg="Ran pod sandbox 6d09419053329608f2b232e2e06c95c95483a31de055275089e9216ee6e50a55 with infra container: default/busybox/POD" id=9d95fea4-b56a-48b0-a0f0-f792db369b6d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.049487797Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c94f9190-4feb-4220-8afc-bb0d54e3cf65 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.049612663Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c94f9190-4feb-4220-8afc-bb0d54e3cf65 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.049649105Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c94f9190-4feb-4220-8afc-bb0d54e3cf65 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.050122482Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7f7e1a5c-9cb5-4781-8d92-18d367132895 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.051532212Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.694561219Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=7f7e1a5c-9cb5-4781-8d92-18d367132895 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.695430305Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d8e7fc87-07eb-4200-abf8-12aa733d1ac9 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.696754794Z" level=info msg="Creating container: default/busybox/busybox" id=317d2587-f297-4f31-9f24-58bfb06833ba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.696888642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.701867427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.702379028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.732578198Z" level=info msg="Created container 146eee67bf1e50b2ac305caf591a3176c66a45217de032c57d625fd7d0452138: default/busybox/busybox" id=317d2587-f297-4f31-9f24-58bfb06833ba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.733127398Z" level=info msg="Starting container: 146eee67bf1e50b2ac305caf591a3176c66a45217de032c57d625fd7d0452138" id=2f342cef-7e2b-4f7c-9c3c-691dec877851 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:31:54 old-k8s-version-680619 crio[773]: time="2025-11-19T22:31:54.734990849Z" level=info msg="Started container" PID=2236 containerID=146eee67bf1e50b2ac305caf591a3176c66a45217de032c57d625fd7d0452138 description=default/busybox/busybox id=2f342cef-7e2b-4f7c-9c3c-691dec877851 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6d09419053329608f2b232e2e06c95c95483a31de055275089e9216ee6e50a55
	Nov 19 22:32:00 old-k8s-version-680619 crio[773]: time="2025-11-19T22:32:00.806217826Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	146eee67bf1e5       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   6d09419053329       busybox                                          default
	848f5dc105b75       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      10 seconds ago      Running             coredns                   0                   0da421152f2ea       coredns-5dd5756b68-7bkvq                         kube-system
	4182c6be9b8fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   da5f454551758       storage-provisioner                              kube-system
	62ef5dd704465       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    21 seconds ago      Running             kindnet-cni               0                   a6efe8cf6dae0       kindnet-mf7gh                                    kube-system
	b4134ee667eb8       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      24 seconds ago      Running             kube-proxy                0                   f67d7e0ba7e67       kube-proxy-4xxp4                                 kube-system
	fe8bdb4930d2f       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      42 seconds ago      Running             kube-scheduler            0                   66379a632bfa7       kube-scheduler-old-k8s-version-680619            kube-system
	8476c1cab026d       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      42 seconds ago      Running             kube-controller-manager   0                   e940a9a4f0650       kube-controller-manager-old-k8s-version-680619   kube-system
	085d7cfb5db99       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      42 seconds ago      Running             kube-apiserver            0                   6ff8aef839f7a       kube-apiserver-old-k8s-version-680619            kube-system
	1e38108c4d012       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      42 seconds ago      Running             etcd                      0                   c4155b61f7bc8       etcd-old-k8s-version-680619                      kube-system
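The table above is the CRI-level view of the node's containers; a minimal sketch of reproducing it with crictl inside the node, assuming the CRI-O socket annotated on the node object (unix:///var/run/crio/crio.sock):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs 848f5dc105b75   # e.g. the coredns container listed above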
	
	
	==> coredns [848f5dc105b7508a64bb319e80dd850448921629ed3c48931587914d3823819d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:37833 - 63271 "HINFO IN 8439797293108704340.2521175014273798964. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.486338721s
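A minimal sketch of inspecting the CoreDNS configuration behind the log above, assuming kubectl is pointed at the old-k8s-version-680619 cluster as reported by the start output:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20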
	
	
	==> describe nodes <==
	Name:               old-k8s-version-680619
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-680619
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=old-k8s-version-680619
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_31_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:31:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-680619
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:31:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:31:55 +0000   Wed, 19 Nov 2025 22:31:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:31:55 +0000   Wed, 19 Nov 2025 22:31:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:31:55 +0000   Wed, 19 Nov 2025 22:31:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:31:55 +0000   Wed, 19 Nov 2025 22:31:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-680619
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                58ea2120-251a-483f-9bb0-1cfccac1ceba
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-7bkvq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-680619                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-mf7gh                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-680619             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-old-k8s-version-680619    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-4xxp4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-680619             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 38s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s   kubelet          Node old-k8s-version-680619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s   kubelet          Node old-k8s-version-680619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s   kubelet          Node old-k8s-version-680619 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node old-k8s-version-680619 event: Registered Node old-k8s-version-680619 in Controller
	  Normal  NodeReady                12s   kubelet          Node old-k8s-version-680619 status is now: NodeReady
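The node description above can be reproduced directly, assuming the kubeconfig context configured by the run above:

    kubectl describe node old-k8s-version-680619
    kubectl get node old-k8s-version-680619 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # expected: True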
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [1e38108c4d012d7093a163554aab1b55c3fcd02dd601a162a9d93df51ea525b0] <==
	{"level":"info","ts":"2025-11-19T22:31:19.80585Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T22:31:19.805883Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T22:31:19.805879Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:31:19.805913Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:31:20.496244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-19T22:31:20.49629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-19T22:31:20.496324Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-19T22:31:20.49634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-19T22:31:20.496354Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T22:31:20.496366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-19T22:31:20.496379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T22:31:20.497229Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-680619 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T22:31:20.497269Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:31:20.497347Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:31:20.497329Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:31:20.497472Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T22:31:20.497512Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T22:31:20.498333Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:31:20.49866Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:31:20.49911Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-19T22:31:20.499489Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:31:20.499685Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-19T22:31:50.97999Z","caller":"traceutil/trace.go:171","msg":"trace[2140987113] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"159.509292ms","start":"2025-11-19T22:31:50.820455Z","end":"2025-11-19T22:31:50.979964Z","steps":["trace[2140987113] 'process raft request'  (duration: 64.075785ms)","trace[2140987113] 'compare'  (duration: 95.306375ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:31:51.243288Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.835014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-680619\" ","response":"range_response_count:1 size:5540"}
	{"level":"info","ts":"2025-11-19T22:31:51.24338Z","caller":"traceutil/trace.go:171","msg":"trace[1862825857] range","detail":"{range_begin:/registry/minions/old-k8s-version-680619; range_end:; response_count:1; response_revision:389; }","duration":"127.942219ms","start":"2025-11-19T22:31:51.115421Z","end":"2025-11-19T22:31:51.243364Z","steps":["trace[1862825857] 'range keys from in-memory index tree'  (duration: 127.728611ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:32:02 up  1:14,  0 user,  load average: 3.62, 2.93, 1.79
	Linux old-k8s-version-680619 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [62ef5dd704465e673d2c0ffbbee567269b8272382a50257d9a840db7be4cedcd] <==
	I1119 22:31:40.370005       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:31:40.370308       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:31:40.370450       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:31:40.370468       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:31:40.370497       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:31:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:31:40.671147       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:31:40.671302       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:31:40.671344       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:31:40.671640       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:31:40.971604       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:31:40.971628       1 metrics.go:72] Registering metrics
	I1119 22:31:40.971674       1 controller.go:711] "Syncing nftables rules"
	I1119 22:31:50.576929       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:31:50.576994       1 main.go:301] handling current node
	I1119 22:32:00.577909       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:32:00.577951       1 main.go:301] handling current node
	
	
	==> kube-apiserver [085d7cfb5db99259128f3053fa796cb86830004c046667015185996e94ebce35] <==
	I1119 22:31:21.602574       1 aggregator.go:166] initial CRD sync complete...
	I1119 22:31:21.602588       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 22:31:21.602595       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:31:21.602601       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:31:21.603377       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 22:31:21.609984       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 22:31:21.613590       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:31:21.614028       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1119 22:31:21.614112       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1119 22:31:21.614446       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 22:31:22.506969       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:31:22.510320       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:31:22.510338       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:31:22.867488       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:31:22.896146       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:31:23.014889       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:31:23.019526       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 22:31:23.020261       1 controller.go:624] quota admission added evaluator for: endpoints
	I1119 22:31:23.023308       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:31:23.561340       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 22:31:24.644846       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 22:31:24.654433       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:31:24.666402       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1119 22:31:37.167266       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 22:31:37.324595       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8476c1cab026dc1b34da1d41bf6e12492d9b05b4bca350dcdb4cc77c4e0a55cf] <==
	I1119 22:31:36.561341       1 shared_informer.go:318] Caches are synced for cronjob
	I1119 22:31:36.561372       1 shared_informer.go:318] Caches are synced for daemon sets
	I1119 22:31:36.579647       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 22:31:36.634866       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 22:31:36.956699       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:31:36.999221       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:31:36.999248       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 22:31:37.170100       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1119 22:31:37.339454       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4xxp4"
	I1119 22:31:37.342208       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mf7gh"
	I1119 22:31:37.425773       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bxxxm"
	I1119 22:31:37.433027       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-7bkvq"
	I1119 22:31:37.440715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="270.747267ms"
	I1119 22:31:37.449065       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.292755ms"
	I1119 22:31:37.449145       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.152µs"
	I1119 22:31:37.636842       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1119 22:31:37.647317       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-bxxxm"
	I1119 22:31:37.654456       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.181902ms"
	I1119 22:31:37.661306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.802503ms"
	I1119 22:31:37.661542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.891µs"
	I1119 22:31:50.797118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="128.747µs"
	I1119 22:31:50.981905       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.022µs"
	I1119 22:31:51.412313       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1119 22:31:51.826125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.302265ms"
	I1119 22:31:51.826223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.519µs"
	
	
	==> kube-proxy [b4134ee667eb82155d293fd0bfe568d4cd8be697ef0d8b8102128fb3394284a1] <==
	I1119 22:31:37.742452       1 server_others.go:69] "Using iptables proxy"
	I1119 22:31:37.752175       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1119 22:31:37.772361       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:31:37.774876       1 server_others.go:152] "Using iptables Proxier"
	I1119 22:31:37.774911       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 22:31:37.774919       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 22:31:37.774939       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 22:31:37.775171       1 server.go:846] "Version info" version="v1.28.0"
	I1119 22:31:37.775187       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:31:37.775935       1 config.go:315] "Starting node config controller"
	I1119 22:31:37.776009       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 22:31:37.776349       1 config.go:188] "Starting service config controller"
	I1119 22:31:37.776394       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 22:31:37.776448       1 config.go:97] "Starting endpoint slice config controller"
	I1119 22:31:37.776466       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 22:31:37.876465       1 shared_informer.go:318] Caches are synced for node config
	I1119 22:31:37.876629       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 22:31:37.876738       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [fe8bdb4930d2fe3bf225e271fd109ebada96f2a2667104dd6e0b679b1511022c] <==
	W1119 22:31:21.579195       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1119 22:31:21.579210       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1119 22:31:21.579305       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 22:31:21.579330       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 22:31:21.579752       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1119 22:31:21.579777       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1119 22:31:21.579868       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1119 22:31:21.579877       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1119 22:31:21.579883       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1119 22:31:21.579935       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1119 22:31:21.579951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1119 22:31:21.579899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1119 22:31:22.394763       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1119 22:31:22.394794       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1119 22:31:22.418216       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1119 22:31:22.418254       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1119 22:31:22.584944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1119 22:31:22.584973       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1119 22:31:22.620455       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1119 22:31:22.620504       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1119 22:31:22.677676       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1119 22:31:22.677709       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1119 22:31:22.782223       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1119 22:31:22.782261       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1119 22:31:24.674982       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 22:31:36 old-k8s-version-680619 kubelet[1402]: I1119 22:31:36.479800    1402 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:31:37 old-k8s-version-680619 kubelet[1402]: I1119 22:31:37.346710    1402 topology_manager.go:215] "Topology Admit Handler" podUID="550a5209-7fe8-450e-9e4b-00c04bf8c100" podNamespace="kube-system" podName="kindnet-mf7gh"
	Nov 19 22:31:37 old-k8s-version-680619 kubelet[1402]: I1119 22:31:37.349283    1402 topology_manager.go:215] "Topology Admit Handler" podUID="74c1052a-f35c-4184-9c26-f8aaebd30451" podNamespace="kube-system" podName="kube-proxy-4xxp4"
	Nov 19 22:31:37 old-k8s-version-680619 kubelet[1402]: I1119 22:31:37.397382    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/74c1052a-f35c-4184-9c26-f8aaebd30451-kube-proxy\") pod \"kube-proxy-4xxp4\" (UID: \"74c1052a-f35c-4184-9c26-f8aaebd30451\") " pod="kube-system/kube-proxy-4xxp4"
	Nov 19 22:31:37 old-k8s-version-680619 kubelet[1402]: I1119 22:31:37.397789    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74c1052a-f35c-4184-9c26-f8aaebd30451-lib-modules\") pod \"kube-proxy-4xxp4\" (UID: \"74c1052a-f35c-4184-9c26-f8aaebd30451\") " pod="kube-system/kube-proxy-4xxp4"
	Nov 19 22:31:37 old-k8s-version-680619 kubelet[1402]: I1119 22:31:37.398057    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/550a5209-7fe8-450e-9e4b-00c04bf8c100-xtables-lock\") pod \"kindnet-mf7gh\" (UID: \"550a5209-7fe8-450e-9e4b-00c04bf8c100\") " pod="kube-system/kindnet-mf7gh"
	Nov 19 22:31:37 old-k8s-version-680619 kubelet[1402]: I1119 22:31:37.398741    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74c1052a-f35c-4184-9c26-f8aaebd30451-xtables-lock\") pod \"kube-proxy-4xxp4\" (UID: \"74c1052a-f35c-4184-9c26-f8aaebd30451\") " pod="kube-system/kube-proxy-4xxp4"
	Nov 19 22:31:37 old-k8s-version-680619 kubelet[1402]: I1119 22:31:37.398795    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w2nf\" (UniqueName: \"kubernetes.io/projected/74c1052a-f35c-4184-9c26-f8aaebd30451-kube-api-access-4w2nf\") pod \"kube-proxy-4xxp4\" (UID: \"74c1052a-f35c-4184-9c26-f8aaebd30451\") " pod="kube-system/kube-proxy-4xxp4"
	Nov 19 22:31:37 old-k8s-version-680619 kubelet[1402]: I1119 22:31:37.398863    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtkbz\" (UniqueName: \"kubernetes.io/projected/550a5209-7fe8-450e-9e4b-00c04bf8c100-kube-api-access-dtkbz\") pod \"kindnet-mf7gh\" (UID: \"550a5209-7fe8-450e-9e4b-00c04bf8c100\") " pod="kube-system/kindnet-mf7gh"
	Nov 19 22:31:37 old-k8s-version-680619 kubelet[1402]: I1119 22:31:37.398915    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/550a5209-7fe8-450e-9e4b-00c04bf8c100-cni-cfg\") pod \"kindnet-mf7gh\" (UID: \"550a5209-7fe8-450e-9e4b-00c04bf8c100\") " pod="kube-system/kindnet-mf7gh"
	Nov 19 22:31:37 old-k8s-version-680619 kubelet[1402]: I1119 22:31:37.398947    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/550a5209-7fe8-450e-9e4b-00c04bf8c100-lib-modules\") pod \"kindnet-mf7gh\" (UID: \"550a5209-7fe8-450e-9e4b-00c04bf8c100\") " pod="kube-system/kindnet-mf7gh"
	Nov 19 22:31:37 old-k8s-version-680619 kubelet[1402]: I1119 22:31:37.777600    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4xxp4" podStartSLOduration=0.777537528 podCreationTimestamp="2025-11-19 22:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:31:37.776537227 +0000 UTC m=+13.155510943" watchObservedRunningTime="2025-11-19 22:31:37.777537528 +0000 UTC m=+13.156511219"
	Nov 19 22:31:40 old-k8s-version-680619 kubelet[1402]: I1119 22:31:40.790666    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-mf7gh" podStartSLOduration=1.31306773 podCreationTimestamp="2025-11-19 22:31:37 +0000 UTC" firstStartedPulling="2025-11-19 22:31:37.659672984 +0000 UTC m=+13.038646666" lastFinishedPulling="2025-11-19 22:31:40.137213683 +0000 UTC m=+15.516187371" observedRunningTime="2025-11-19 22:31:40.790443172 +0000 UTC m=+16.169416863" watchObservedRunningTime="2025-11-19 22:31:40.790608435 +0000 UTC m=+16.169582126"
	Nov 19 22:31:50 old-k8s-version-680619 kubelet[1402]: I1119 22:31:50.768311    1402 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 19 22:31:50 old-k8s-version-680619 kubelet[1402]: I1119 22:31:50.793431    1402 topology_manager.go:215] "Topology Admit Handler" podUID="43b02c7c-446f-4009-9bfe-f47cf1054208" podNamespace="kube-system" podName="storage-provisioner"
	Nov 19 22:31:50 old-k8s-version-680619 kubelet[1402]: I1119 22:31:50.797323    1402 topology_manager.go:215] "Topology Admit Handler" podUID="24ad2a97-5b78-4886-85c8-52a4b867d20f" podNamespace="kube-system" podName="coredns-5dd5756b68-7bkvq"
	Nov 19 22:31:50 old-k8s-version-680619 kubelet[1402]: I1119 22:31:50.901627    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/43b02c7c-446f-4009-9bfe-f47cf1054208-tmp\") pod \"storage-provisioner\" (UID: \"43b02c7c-446f-4009-9bfe-f47cf1054208\") " pod="kube-system/storage-provisioner"
	Nov 19 22:31:50 old-k8s-version-680619 kubelet[1402]: I1119 22:31:50.901675    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrw4c\" (UniqueName: \"kubernetes.io/projected/43b02c7c-446f-4009-9bfe-f47cf1054208-kube-api-access-wrw4c\") pod \"storage-provisioner\" (UID: \"43b02c7c-446f-4009-9bfe-f47cf1054208\") " pod="kube-system/storage-provisioner"
	Nov 19 22:31:50 old-k8s-version-680619 kubelet[1402]: I1119 22:31:50.901709    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6zpg\" (UniqueName: \"kubernetes.io/projected/24ad2a97-5b78-4886-85c8-52a4b867d20f-kube-api-access-f6zpg\") pod \"coredns-5dd5756b68-7bkvq\" (UID: \"24ad2a97-5b78-4886-85c8-52a4b867d20f\") " pod="kube-system/coredns-5dd5756b68-7bkvq"
	Nov 19 22:31:50 old-k8s-version-680619 kubelet[1402]: I1119 22:31:50.901849    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24ad2a97-5b78-4886-85c8-52a4b867d20f-config-volume\") pod \"coredns-5dd5756b68-7bkvq\" (UID: \"24ad2a97-5b78-4886-85c8-52a4b867d20f\") " pod="kube-system/coredns-5dd5756b68-7bkvq"
	Nov 19 22:31:51 old-k8s-version-680619 kubelet[1402]: I1119 22:31:51.810083    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.810037093 podCreationTimestamp="2025-11-19 22:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:31:51.809934167 +0000 UTC m=+27.188907880" watchObservedRunningTime="2025-11-19 22:31:51.810037093 +0000 UTC m=+27.189010784"
	Nov 19 22:31:53 old-k8s-version-680619 kubelet[1402]: I1119 22:31:53.724943    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-7bkvq" podStartSLOduration=16.724884697 podCreationTimestamp="2025-11-19 22:31:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:31:51.81950223 +0000 UTC m=+27.198475937" watchObservedRunningTime="2025-11-19 22:31:53.724884697 +0000 UTC m=+29.103858389"
	Nov 19 22:31:53 old-k8s-version-680619 kubelet[1402]: I1119 22:31:53.725209    1402 topology_manager.go:215] "Topology Admit Handler" podUID="e61d10ef-eb12-4b20-83e7-48341a04a48a" podNamespace="default" podName="busybox"
	Nov 19 22:31:53 old-k8s-version-680619 kubelet[1402]: I1119 22:31:53.820402    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7txh\" (UniqueName: \"kubernetes.io/projected/e61d10ef-eb12-4b20-83e7-48341a04a48a-kube-api-access-p7txh\") pod \"busybox\" (UID: \"e61d10ef-eb12-4b20-83e7-48341a04a48a\") " pod="default/busybox"
	Nov 19 22:31:54 old-k8s-version-680619 kubelet[1402]: I1119 22:31:54.818087    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.172926943 podCreationTimestamp="2025-11-19 22:31:53 +0000 UTC" firstStartedPulling="2025-11-19 22:31:54.049842078 +0000 UTC m=+29.428815759" lastFinishedPulling="2025-11-19 22:31:54.694953685 +0000 UTC m=+30.073927367" observedRunningTime="2025-11-19 22:31:54.817641204 +0000 UTC m=+30.196614895" watchObservedRunningTime="2025-11-19 22:31:54.818038551 +0000 UTC m=+30.197012242"
	
	
	==> storage-provisioner [4182c6be9b8fd4323b921cc0bba08627a4b4973c71c6796006ae78305993af0d] <==
	I1119 22:31:51.303404       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:31:51.312239       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:31:51.312296       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 22:31:51.317738       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:31:51.317905       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-680619_e8dc1bb3-4e40-4d19-b32e-35ee36b8b529!
	I1119 22:31:51.317923       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa8da102-18e8-4e00-96cc-7642d9f355a2", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-680619_e8dc1bb3-4e40-4d19-b32e-35ee36b8b529 became leader
	I1119 22:31:51.418703       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-680619_e8dc1bb3-4e40-4d19-b32e-35ee36b8b529!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-680619 -n old-k8s-version-680619
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-680619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.02s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-178067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-178067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (230.77385ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:32:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-178067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-178067 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-178067 describe deploy/metrics-server -n kube-system: exit status 1 (56.611751ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-178067 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
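The MK_ADDON_ENABLE_PAUSED error captured above is the "check paused" pre-flight that addons enable runs: before applying the addon, minikube lists paused containers on the node by shelling out to sudo runc list -f json, and on this crio-backed node the runc state directory /run/runc does not exist, so the listing itself exits non-zero. A minimal sketch of reproducing that check by hand, assuming the no-preload-178067 profile from this run (the exact invocation minikube issues internally may differ):
	# Same runtime query the addon pre-flight performs; on this crio node it fails
	# with "open /run/runc: no such file or directory", matching the stderr above.
	out/minikube-linux-amd64 ssh -p no-preload-178067 "sudo runc list -f json"
	# The crio-managed containers themselves can still be listed via crictl:
	out/minikube-linux-amd64 ssh -p no-preload-178067 "sudo crictl ps -a"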
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-178067
helpers_test.go:243: (dbg) docker inspect no-preload-178067:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37",
	        "Created": "2025-11-19T22:31:25.543221838Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 237383,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:31:25.581527105Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37/hostname",
	        "HostsPath": "/var/lib/docker/containers/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37/hosts",
	        "LogPath": "/var/lib/docker/containers/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37-json.log",
	        "Name": "/no-preload-178067",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-178067:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-178067",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37",
	                "LowerDir": "/var/lib/docker/overlay2/178cd503a6e39d7ee32119169c14a96aa506d8f743cbd3d41514328f24c0a6f7-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/178cd503a6e39d7ee32119169c14a96aa506d8f743cbd3d41514328f24c0a6f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/178cd503a6e39d7ee32119169c14a96aa506d8f743cbd3d41514328f24c0a6f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/178cd503a6e39d7ee32119169c14a96aa506d8f743cbd3d41514328f24c0a6f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-178067",
	                "Source": "/var/lib/docker/volumes/no-preload-178067/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-178067",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-178067",
	                "name.minikube.sigs.k8s.io": "no-preload-178067",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a489fd76f60d471b5dcdbd9ae5fc83f1c13e9c5f8b10f05230eeac3a94bf7277",
	            "SandboxKey": "/var/run/docker/netns/a489fd76f60d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-178067": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d2927e8174830464514428039b44b26b0e43356a4a3627c8d30f3646150dbf7f",
	                    "EndpointID": "499a25ff43827b8fb29ada62dd649c09d83a3dac651fffc7c0236eeea4705729",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "6a:4a:62:41:b5:ae",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-178067",
	                        "4349f03a9605"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-178067 -n no-preload-178067
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-178067 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-654834 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-654834             │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ ssh     │ -p cilium-654834 sudo crio config                                                                                                                                                                                                             │ cilium-654834             │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ delete  │ -p cilium-654834                                                                                                                                                                                                                              │ cilium-654834             │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p missing-upgrade-015670 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                                             │ missing-upgrade-015670    │ jenkins │ v1.32.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p NoKubernetes-662839 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ delete  │ -p running-upgrade-083468                                                                                                                                                                                                                     │ running-upgrade-083468    │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-801704 │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ delete  │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p NoKubernetes-662839 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p missing-upgrade-015670 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-015670    │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:31 UTC │
	│ ssh     │ -p NoKubernetes-662839 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ stop    │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p NoKubernetes-662839 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ stop    │ -p kubernetes-upgrade-801704                                                                                                                                                                                                                  │ kubernetes-upgrade-801704 │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-801704 │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ ssh     │ -p NoKubernetes-662839 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ delete  │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ delete  │ -p missing-upgrade-015670                                                                                                                                                                                                                     │ missing-upgrade-015670    │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-680619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ stop    │ -p old-k8s-version-680619 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680619 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-178067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:32:19
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:32:19.171310  243333 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:32:19.171408  243333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:32:19.171420  243333 out.go:374] Setting ErrFile to fd 2...
	I1119 22:32:19.171425  243333 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:32:19.171655  243333 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:32:19.172153  243333 out.go:368] Setting JSON to false
	I1119 22:32:19.173413  243333 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4487,"bootTime":1763587052,"procs":290,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:32:19.173492  243333 start.go:143] virtualization: kvm guest
	I1119 22:32:19.175376  243333 out.go:179] * [old-k8s-version-680619] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:32:19.176749  243333 notify.go:221] Checking for updates...
	I1119 22:32:19.176774  243333 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:32:19.177896  243333 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:32:19.178950  243333 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:32:19.179941  243333 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:32:19.180977  243333 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:32:19.182054  243333 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:32:19.183460  243333 config.go:182] Loaded profile config "old-k8s-version-680619": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 22:32:19.184886  243333 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1119 22:32:19.185782  243333 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:32:19.209665  243333 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:32:19.209746  243333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:32:19.268369  243333 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 22:32:19.258872553 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:32:19.268472  243333 docker.go:319] overlay module found
	I1119 22:32:19.270045  243333 out.go:179] * Using the docker driver based on existing profile
	I1119 22:32:19.271048  243333 start.go:309] selected driver: docker
	I1119 22:32:19.271059  243333 start.go:930] validating driver "docker" against &{Name:old-k8s-version-680619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-680619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:32:19.271132  243333 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:32:19.271659  243333 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:32:19.326201  243333 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 22:32:19.316840005 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:32:19.326520  243333 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:32:19.326567  243333 cni.go:84] Creating CNI manager for ""
	I1119 22:32:19.326625  243333 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:32:19.326690  243333 start.go:353] cluster config:
	{Name:old-k8s-version-680619 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-680619 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:32:19.328376  243333 out.go:179] * Starting "old-k8s-version-680619" primary control-plane node in "old-k8s-version-680619" cluster
	I1119 22:32:19.329335  243333 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:32:19.330423  243333 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:32:19.331382  243333 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 22:32:19.331408  243333 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1119 22:32:19.331418  243333 cache.go:65] Caching tarball of preloaded images
	I1119 22:32:19.331471  243333 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:32:19.331498  243333 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:32:19.331508  243333 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1119 22:32:19.331603  243333 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/old-k8s-version-680619/config.json ...
	I1119 22:32:19.352239  243333 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:32:19.352255  243333 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:32:19.352271  243333 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:32:19.352291  243333 start.go:360] acquireMachinesLock for old-k8s-version-680619: {Name:mk482151dc19afba1d6b77116c4df371236cb304 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:32:19.352358  243333 start.go:364] duration metric: took 35.263µs to acquireMachinesLock for "old-k8s-version-680619"
	I1119 22:32:19.352376  243333 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:32:19.352381  243333 fix.go:54] fixHost starting: 
	I1119 22:32:19.352582  243333 cli_runner.go:164] Run: docker container inspect old-k8s-version-680619 --format={{.State.Status}}
	I1119 22:32:19.369881  243333 fix.go:112] recreateIfNeeded on old-k8s-version-680619: state=Stopped err=<nil>
	W1119 22:32:19.369904  243333 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 19 22:32:09 no-preload-178067 crio[767]: time="2025-11-19T22:32:09.931458379Z" level=info msg="Starting container: 03f712ff49489052f57dbd240ddf96ceff58549fb585d7657f9f84ef589ac857" id=5408c999-d14e-41f2-b67d-dd2b50cbb0f5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:32:09 no-preload-178067 crio[767]: time="2025-11-19T22:32:09.933244183Z" level=info msg="Started container" PID=2922 containerID=03f712ff49489052f57dbd240ddf96ceff58549fb585d7657f9f84ef589ac857 description=kube-system/coredns-66bc5c9577-9dwxr/coredns id=5408c999-d14e-41f2-b67d-dd2b50cbb0f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cf50192f3b7730f3480b8563a2f6a889d7759d484f86aad97470e08ff6a926dc
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.192978073Z" level=info msg="Running pod sandbox: default/busybox/POD" id=155cc56d-1952-490a-a030-bf9692cf04ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.193048104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.197464924Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:11258176f2b2ce93100ece65932f6e32126da7e32b69f9dd5c6b782eb4115305 UID:6825c6dc-8105-48a5-9e63-ebb599a140e5 NetNS:/var/run/netns/b4291ddb-aefa-4370-b5f1-c3de8682c065 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000520798}] Aliases:map[]}"
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.197493586Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.21179275Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:11258176f2b2ce93100ece65932f6e32126da7e32b69f9dd5c6b782eb4115305 UID:6825c6dc-8105-48a5-9e63-ebb599a140e5 NetNS:/var/run/netns/b4291ddb-aefa-4370-b5f1-c3de8682c065 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000520798}] Aliases:map[]}"
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.211945085Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.212595204Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.213385623Z" level=info msg="Ran pod sandbox 11258176f2b2ce93100ece65932f6e32126da7e32b69f9dd5c6b782eb4115305 with infra container: default/busybox/POD" id=155cc56d-1952-490a-a030-bf9692cf04ec name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.214406441Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2b0e370a-eaed-47c7-94cf-fa3af16c588c name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.21450186Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2b0e370a-eaed-47c7-94cf-fa3af16c588c name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.214532176Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2b0e370a-eaed-47c7-94cf-fa3af16c588c name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.215033436Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0831b50e-05af-4187-9dca-97f47b738b44 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.216374938Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.894207198Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0831b50e-05af-4187-9dca-97f47b738b44 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.894749161Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b4144938-daa7-4e43-ad9b-b9e5c03d5cd5 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.895935947Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=531677ba-aa35-4b89-8a8c-2f08830dde90 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.898714192Z" level=info msg="Creating container: default/busybox/busybox" id=0a679ffe-3d85-4c5a-9656-34fd867b7d0a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.898839612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.902132481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.902576457Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.925914909Z" level=info msg="Created container 979554d7c160d1886ebbedcba2581b131d3f85fd7fb810198e280fe7f7ee5d1f: default/busybox/busybox" id=0a679ffe-3d85-4c5a-9656-34fd867b7d0a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.926383926Z" level=info msg="Starting container: 979554d7c160d1886ebbedcba2581b131d3f85fd7fb810198e280fe7f7ee5d1f" id=84a3b209-0f12-4118-a093-001a13810de4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:32:13 no-preload-178067 crio[767]: time="2025-11-19T22:32:13.928000579Z" level=info msg="Started container" PID=3003 containerID=979554d7c160d1886ebbedcba2581b131d3f85fd7fb810198e280fe7f7ee5d1f description=default/busybox/busybox id=84a3b209-0f12-4118-a093-001a13810de4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=11258176f2b2ce93100ece65932f6e32126da7e32b69f9dd5c6b782eb4115305
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	979554d7c160d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   11258176f2b2c       busybox                                     default
	03f712ff49489       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   cf50192f3b773       coredns-66bc5c9577-9dwxr                    kube-system
	d0a7120e262aa       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   9f55cf7251cec       storage-provisioner                         kube-system
	fbda8900400d5       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   71319a46fcb89       kindnet-4rclw                               kube-system
	dfc6a8feecb7f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   ce99f1b156636       kube-proxy-xll6z                            kube-system
	2763c890f641b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   426bd7cf2d281       kube-apiserver-no-preload-178067            kube-system
	d1eb4963bda01       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   3e51dc33ab01d       etcd-no-preload-178067                      kube-system
	822909f3d5b76       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   43b1a4763fecb       kube-controller-manager-no-preload-178067   kube-system
	de3f7bb632702       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   cf4bdf981e156       kube-scheduler-no-preload-178067            kube-system
	
	
	==> coredns [03f712ff49489052f57dbd240ddf96ceff58549fb585d7657f9f84ef589ac857] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56897 - 31762 "HINFO IN 1422382474157982818.4075435164453404289. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.157591777s
	
	
	==> describe nodes <==
	Name:               no-preload-178067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-178067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=no-preload-178067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_31_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:31:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-178067
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:32:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:32:09 +0000   Wed, 19 Nov 2025 22:31:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:32:09 +0000   Wed, 19 Nov 2025 22:31:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:32:09 +0000   Wed, 19 Nov 2025 22:31:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:32:09 +0000   Wed, 19 Nov 2025 22:32:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-178067
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                4f7d1af3-d456-499c-ab45-67c0314eb59f
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-9dwxr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-178067                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-4rclw                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-178067             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-178067    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-xll6z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-178067             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node no-preload-178067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node no-preload-178067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node no-preload-178067 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node no-preload-178067 event: Registered Node no-preload-178067 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-178067 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [d1eb4963bda019aa8d6f2365fdef091ade7b061b92cf29d58a7662b9ea71163f] <==
	{"level":"warn","ts":"2025-11-19T22:31:47.823182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.829128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.835478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.843664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.849883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.857923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.864649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.871581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.877591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.883766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.890798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.897779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.904354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.910758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.918052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.925908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.932500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.945494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.951332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:47.957356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:48.005143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:31:50.980279Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.946852ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790131540556117 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/coredns\" value_size:112 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T22:31:50.980501Z","caller":"traceutil/trace.go:172","msg":"trace[569724808] transaction","detail":"{read_only:false; response_revision:247; number_of_response:1; }","duration":"145.856094ms","start":"2025-11-19T22:31:50.834626Z","end":"2025-11-19T22:31:50.980482Z","steps":["trace[569724808] 'process raft request'  (duration: 26.317403ms)","trace[569724808] 'compare'  (duration: 118.822067ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:31:51.219398Z","caller":"traceutil/trace.go:172","msg":"trace[1401508854] transaction","detail":"{read_only:false; response_revision:250; number_of_response:1; }","duration":"145.005125ms","start":"2025-11-19T22:31:51.074373Z","end":"2025-11-19T22:31:51.219378Z","steps":["trace[1401508854] 'process raft request'  (duration: 56.574553ms)","trace[1401508854] 'compare'  (duration: 88.286089ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:31:51.219505Z","caller":"traceutil/trace.go:172","msg":"trace[1248594520] transaction","detail":"{read_only:false; response_revision:251; number_of_response:1; }","duration":"143.571515ms","start":"2025-11-19T22:31:51.075898Z","end":"2025-11-19T22:31:51.219469Z","steps":["trace[1248594520] 'process raft request'  (duration: 143.420349ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:32:21 up  1:14,  0 user,  load average: 2.67, 2.75, 1.75
	Linux no-preload-178067 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fbda8900400d564f5f521306fa6847f6d5ec22f6e05dac2e414492fc3a8ae1cd] <==
	I1119 22:31:59.233262       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:31:59.233507       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 22:31:59.233632       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:31:59.233648       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:31:59.233673       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:31:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:31:59.434246       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:31:59.434282       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:31:59.434294       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:31:59.434456       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:31:59.735140       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:31:59.735161       1 metrics.go:72] Registering metrics
	I1119 22:31:59.735209       1 controller.go:711] "Syncing nftables rules"
	I1119 22:32:09.440916       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:32:09.440957       1 main.go:301] handling current node
	I1119 22:32:19.436904       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:32:19.436952       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2763c890f641b9c1b281204b187170a021a093f7f497637c634855bf09a3bc8b] <==
	I1119 22:31:48.460196       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 22:31:48.460703       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:31:48.464539       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:31:48.464675       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:31:48.469640       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:31:48.469864       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:31:48.644380       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:31:49.364202       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:31:49.368707       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:31:49.368726       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:31:49.759324       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:31:49.791871       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:31:49.866488       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:31:49.871438       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1119 22:31:49.872410       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:31:49.875810       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:31:50.384031       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:31:50.984951       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:31:51.244567       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:31:51.253316       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:31:55.487770       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:31:55.491068       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:31:56.134600       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:31:56.438468       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1119 22:32:19.988068       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:56914: use of closed network connection
	
	
	==> kube-controller-manager [822909f3d5b76c6b3713cdb22ab5dc246466fea7b4d9c7aedd69131cf076af16] <==
	I1119 22:31:55.369031       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:31:55.370735       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-178067" podCIDRs=["10.244.0.0/24"]
	I1119 22:31:55.375875       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 22:31:55.381753       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 22:31:55.381798       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:31:55.381824       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:31:55.381833       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:31:55.383029       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:31:55.383051       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:31:55.383247       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:31:55.383281       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:31:55.383420       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 22:31:55.383501       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 22:31:55.384399       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:31:55.384418       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:31:55.384465       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:31:55.384574       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:31:55.384603       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:31:55.385579       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:31:55.389847       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:31:55.389854       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:31:55.396103       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:31:55.396131       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:31:55.411608       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:32:10.336083       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [dfc6a8feecb7fb592c19eb67c4791f66e6d27ab40f44058a2b125abe8c2a737c] <==
	I1119 22:31:57.137237       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:31:57.201391       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:31:57.302284       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:31:57.302311       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 22:31:57.302402       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:31:57.319887       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:31:57.319925       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:31:57.324957       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:31:57.325254       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:31:57.325277       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:31:57.327640       1 config.go:200] "Starting service config controller"
	I1119 22:31:57.327710       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:31:57.327675       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:31:57.327918       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:31:57.327810       1 config.go:309] "Starting node config controller"
	I1119 22:31:57.327953       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:31:57.327712       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:31:57.327976       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:31:57.327976       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:31:57.428395       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:31:57.428417       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:31:57.428520       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [de3f7bb6327026f0190e086c979dd42f842a03e36bf316a3afda31f68bca580b] <==
	E1119 22:31:48.407753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:31:48.407763       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:31:48.407837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:31:48.412219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:31:48.412473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:31:48.412515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:31:48.412582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:31:48.412622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:31:48.412735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:31:48.412780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:31:48.412837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:31:48.412872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:31:48.412984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:31:48.413056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:31:49.234086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:31:49.263060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:31:49.302026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:31:49.302793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:31:49.385344       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:31:49.401473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:31:49.430576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:31:49.549900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 22:31:49.568889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:31:49.593030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1119 22:31:52.406115       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:31:55 no-preload-178067 kubelet[2290]: I1119 22:31:55.416489    2290 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: I1119 22:31:56.193966    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6fd057bd-ec63-4d13-830e-4f06eb80b192-cni-cfg\") pod \"kindnet-4rclw\" (UID: \"6fd057bd-ec63-4d13-830e-4f06eb80b192\") " pod="kube-system/kindnet-4rclw"
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: I1119 22:31:56.193999    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6fd057bd-ec63-4d13-830e-4f06eb80b192-lib-modules\") pod \"kindnet-4rclw\" (UID: \"6fd057bd-ec63-4d13-830e-4f06eb80b192\") " pod="kube-system/kindnet-4rclw"
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: I1119 22:31:56.194017    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e08767f7-3973-41f2-b6d6-c03adb24002f-xtables-lock\") pod \"kube-proxy-xll6z\" (UID: \"e08767f7-3973-41f2-b6d6-c03adb24002f\") " pod="kube-system/kube-proxy-xll6z"
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: I1119 22:31:56.194031    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e08767f7-3973-41f2-b6d6-c03adb24002f-kube-proxy\") pod \"kube-proxy-xll6z\" (UID: \"e08767f7-3973-41f2-b6d6-c03adb24002f\") " pod="kube-system/kube-proxy-xll6z"
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: I1119 22:31:56.194045    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e08767f7-3973-41f2-b6d6-c03adb24002f-lib-modules\") pod \"kube-proxy-xll6z\" (UID: \"e08767f7-3973-41f2-b6d6-c03adb24002f\") " pod="kube-system/kube-proxy-xll6z"
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: I1119 22:31:56.194083    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vpnf\" (UniqueName: \"kubernetes.io/projected/6fd057bd-ec63-4d13-830e-4f06eb80b192-kube-api-access-9vpnf\") pod \"kindnet-4rclw\" (UID: \"6fd057bd-ec63-4d13-830e-4f06eb80b192\") " pod="kube-system/kindnet-4rclw"
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: I1119 22:31:56.194167    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brwdg\" (UniqueName: \"kubernetes.io/projected/e08767f7-3973-41f2-b6d6-c03adb24002f-kube-api-access-brwdg\") pod \"kube-proxy-xll6z\" (UID: \"e08767f7-3973-41f2-b6d6-c03adb24002f\") " pod="kube-system/kube-proxy-xll6z"
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: I1119 22:31:56.194195    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6fd057bd-ec63-4d13-830e-4f06eb80b192-xtables-lock\") pod \"kindnet-4rclw\" (UID: \"6fd057bd-ec63-4d13-830e-4f06eb80b192\") " pod="kube-system/kindnet-4rclw"
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: E1119 22:31:56.300067    2290 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: E1119 22:31:56.300104    2290 projected.go:196] Error preparing data for projected volume kube-api-access-brwdg for pod kube-system/kube-proxy-xll6z: configmap "kube-root-ca.crt" not found
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: E1119 22:31:56.300069    2290 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: E1119 22:31:56.300188    2290 projected.go:196] Error preparing data for projected volume kube-api-access-9vpnf for pod kube-system/kindnet-4rclw: configmap "kube-root-ca.crt" not found
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: E1119 22:31:56.300173    2290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e08767f7-3973-41f2-b6d6-c03adb24002f-kube-api-access-brwdg podName:e08767f7-3973-41f2-b6d6-c03adb24002f nodeName:}" failed. No retries permitted until 2025-11-19 22:31:56.800148435 +0000 UTC m=+6.208781378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-brwdg" (UniqueName: "kubernetes.io/projected/e08767f7-3973-41f2-b6d6-c03adb24002f-kube-api-access-brwdg") pod "kube-proxy-xll6z" (UID: "e08767f7-3973-41f2-b6d6-c03adb24002f") : configmap "kube-root-ca.crt" not found
	Nov 19 22:31:56 no-preload-178067 kubelet[2290]: E1119 22:31:56.300262    2290 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6fd057bd-ec63-4d13-830e-4f06eb80b192-kube-api-access-9vpnf podName:6fd057bd-ec63-4d13-830e-4f06eb80b192 nodeName:}" failed. No retries permitted until 2025-11-19 22:31:56.800244207 +0000 UTC m=+6.208877139 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9vpnf" (UniqueName: "kubernetes.io/projected/6fd057bd-ec63-4d13-830e-4f06eb80b192-kube-api-access-9vpnf") pod "kindnet-4rclw" (UID: "6fd057bd-ec63-4d13-830e-4f06eb80b192") : configmap "kube-root-ca.crt" not found
	Nov 19 22:31:57 no-preload-178067 kubelet[2290]: I1119 22:31:57.712467    2290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xll6z" podStartSLOduration=1.712447831 podStartE2EDuration="1.712447831s" podCreationTimestamp="2025-11-19 22:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:31:57.712301746 +0000 UTC m=+7.120934685" watchObservedRunningTime="2025-11-19 22:31:57.712447831 +0000 UTC m=+7.121080779"
	Nov 19 22:31:59 no-preload-178067 kubelet[2290]: I1119 22:31:59.726327    2290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4rclw" podStartSLOduration=1.710215289 podStartE2EDuration="3.72630949s" podCreationTimestamp="2025-11-19 22:31:56 +0000 UTC" firstStartedPulling="2025-11-19 22:31:57.06595936 +0000 UTC m=+6.474592304" lastFinishedPulling="2025-11-19 22:31:59.082053575 +0000 UTC m=+8.490686505" observedRunningTime="2025-11-19 22:31:59.72620861 +0000 UTC m=+9.134841556" watchObservedRunningTime="2025-11-19 22:31:59.72630949 +0000 UTC m=+9.134942438"
	Nov 19 22:32:09 no-preload-178067 kubelet[2290]: I1119 22:32:09.558082    2290 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:32:09 no-preload-178067 kubelet[2290]: I1119 22:32:09.682016    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/044b16d2-608c-4973-adee-cf95ff049440-tmp\") pod \"storage-provisioner\" (UID: \"044b16d2-608c-4973-adee-cf95ff049440\") " pod="kube-system/storage-provisioner"
	Nov 19 22:32:09 no-preload-178067 kubelet[2290]: I1119 22:32:09.682054    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c78484a3-8cd0-4d6a-831c-727426b3d407-config-volume\") pod \"coredns-66bc5c9577-9dwxr\" (UID: \"c78484a3-8cd0-4d6a-831c-727426b3d407\") " pod="kube-system/coredns-66bc5c9577-9dwxr"
	Nov 19 22:32:09 no-preload-178067 kubelet[2290]: I1119 22:32:09.682072    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gtj8\" (UniqueName: \"kubernetes.io/projected/c78484a3-8cd0-4d6a-831c-727426b3d407-kube-api-access-9gtj8\") pod \"coredns-66bc5c9577-9dwxr\" (UID: \"c78484a3-8cd0-4d6a-831c-727426b3d407\") " pod="kube-system/coredns-66bc5c9577-9dwxr"
	Nov 19 22:32:09 no-preload-178067 kubelet[2290]: I1119 22:32:09.682090    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfxkd\" (UniqueName: \"kubernetes.io/projected/044b16d2-608c-4973-adee-cf95ff049440-kube-api-access-qfxkd\") pod \"storage-provisioner\" (UID: \"044b16d2-608c-4973-adee-cf95ff049440\") " pod="kube-system/storage-provisioner"
	Nov 19 22:32:10 no-preload-178067 kubelet[2290]: I1119 22:32:10.749124    2290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9dwxr" podStartSLOduration=14.749105726 podStartE2EDuration="14.749105726s" podCreationTimestamp="2025-11-19 22:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:32:10.738796792 +0000 UTC m=+20.147429739" watchObservedRunningTime="2025-11-19 22:32:10.749105726 +0000 UTC m=+20.157738674"
	Nov 19 22:32:12 no-preload-178067 kubelet[2290]: I1119 22:32:12.888076    2290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.888056918 podStartE2EDuration="16.888056918s" podCreationTimestamp="2025-11-19 22:31:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:32:10.757133846 +0000 UTC m=+20.165766792" watchObservedRunningTime="2025-11-19 22:32:12.888056918 +0000 UTC m=+22.296689866"
	Nov 19 22:32:12 no-preload-178067 kubelet[2290]: I1119 22:32:12.998761    2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f78x5\" (UniqueName: \"kubernetes.io/projected/6825c6dc-8105-48a5-9e63-ebb599a140e5-kube-api-access-f78x5\") pod \"busybox\" (UID: \"6825c6dc-8105-48a5-9e63-ebb599a140e5\") " pod="default/busybox"
	
	
	==> storage-provisioner [d0a7120e262aa52a9ae7750866f24bc6fe1f27592496c869fd057a86166c96e9] <==
	I1119 22:32:09.934900       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:32:09.943912       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:32:09.943955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:32:09.945770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:32:09.950808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:32:09.950957       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:32:09.951095       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-178067_2ddb6828-5850-47dc-8d83-19fb26e94f86!
	I1119 22:32:09.951063       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"410535e3-f1a2-4daf-93d0-dd88f3003fa0", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-178067_2ddb6828-5850-47dc-8d83-19fb26e94f86 became leader
	W1119 22:32:09.952795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:32:09.955887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:32:10.051622       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-178067_2ddb6828-5850-47dc-8d83-19fb26e94f86!
	W1119 22:32:11.959495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:32:11.963983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:32:13.966311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:32:13.970307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:32:15.973567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:32:15.977064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:32:17.979529       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:32:17.983100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:32:19.985773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:32:19.989672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-178067 -n no-preload-178067
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-178067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.88s)
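Note on the scheduler log excerpt above: the repeated "Failed to watch ... is forbidden" messages from kube-scheduler, like the kubelet's "configmap \"kube-root-ca.crt\" not found" errors, appear to be the usual transient bootstrap noise while the control plane is still coming up; they stop once the informer caches sync (22:31:52 in this run). A quick spot-check that the RBAC bindings did come up, as a diagnostic sketch outside the test itself (context name taken from the harness commands above):

	kubectl --context no-preload-178067 auth can-i list nodes --as=system:kube-scheduler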

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.63s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-680619 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-680619 --alsologtostderr -v=1: exit status 80 (1.747858458s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-680619 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:33:15.146895  255006 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:33:15.147148  255006 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:15.147158  255006 out.go:374] Setting ErrFile to fd 2...
	I1119 22:33:15.147162  255006 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:15.147364  255006 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:33:15.147637  255006 out.go:368] Setting JSON to false
	I1119 22:33:15.147688  255006 mustload.go:66] Loading cluster: old-k8s-version-680619
	I1119 22:33:15.148140  255006 config.go:182] Loaded profile config "old-k8s-version-680619": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 22:33:15.148719  255006 cli_runner.go:164] Run: docker container inspect old-k8s-version-680619 --format={{.State.Status}}
	I1119 22:33:15.167043  255006 host.go:66] Checking if "old-k8s-version-680619" exists ...
	I1119 22:33:15.167318  255006 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:33:15.227396  255006 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-19 22:33:15.217102763 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:33:15.228034  255006 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763575914-21918/minikube-v1.37.0-1763575914-21918-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763575914-21918-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-680619 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 22:33:15.229947  255006 out.go:179] * Pausing node old-k8s-version-680619 ... 
	I1119 22:33:15.230923  255006 host.go:66] Checking if "old-k8s-version-680619" exists ...
	I1119 22:33:15.231175  255006 ssh_runner.go:195] Run: systemctl --version
	I1119 22:33:15.231226  255006 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-680619
	I1119 22:33:15.248416  255006 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/old-k8s-version-680619/id_rsa Username:docker}
	I1119 22:33:15.337860  255006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:33:15.349750  255006 pause.go:52] kubelet running: true
	I1119 22:33:15.349832  255006 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:33:15.510924  255006 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:33:15.511030  255006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:33:15.593743  255006 cri.go:89] found id: "73d881498864dc714b5d899ea9180eb714418ab2c2ac07c6c45be10aa174996b"
	I1119 22:33:15.593770  255006 cri.go:89] found id: "5eb7bc276d9ede83cf7f9707c5d154ff245634aef28f4966849644db5a50f3a7"
	I1119 22:33:15.593775  255006 cri.go:89] found id: "414de1bfb6aec59dd4d86bf9f2fea33a808f25024113c43be7b1c30c813216b0"
	I1119 22:33:15.593781  255006 cri.go:89] found id: "631bdbae35af1a0fab26aaa35346ef686031049223c980f1c1523d8c16183109"
	I1119 22:33:15.593785  255006 cri.go:89] found id: "60edd37d9535be6816f9e4f45d547b93a9514cd2c28698c56bf7d909151f9696"
	I1119 22:33:15.593793  255006 cri.go:89] found id: "89080922c0159e21e61091b24b9351b5cb28d703c1ed3ad99034c55326191766"
	I1119 22:33:15.593797  255006 cri.go:89] found id: "8f84773f448215b180ee3539cd8a463b1872e20afd8aa7857fae9f872b39a9c0"
	I1119 22:33:15.593801  255006 cri.go:89] found id: "b26645bb067934e3f245a0dc0ee3200d5ec7b936438cf91b80afef3be85e62af"
	I1119 22:33:15.593851  255006 cri.go:89] found id: "b40d5aa13f1581f4d75fa92e103d0cc9932c695d82287850952ad9cce1d98ba5"
	I1119 22:33:15.593861  255006 cri.go:89] found id: "81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a"
	I1119 22:33:15.593866  255006 cri.go:89] found id: "bd5fb08be8644f6048088566e05635f71fa87e5bcbb3288f49cac595a87fda22"
	I1119 22:33:15.593870  255006 cri.go:89] found id: ""
	I1119 22:33:15.593925  255006 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:33:15.610042  255006 retry.go:31] will retry after 337.038926ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:33:15Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:33:15.947581  255006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:33:15.962021  255006 pause.go:52] kubelet running: false
	I1119 22:33:15.962075  255006 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:33:16.131243  255006 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:33:16.131348  255006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:33:16.211416  255006 cri.go:89] found id: "73d881498864dc714b5d899ea9180eb714418ab2c2ac07c6c45be10aa174996b"
	I1119 22:33:16.211525  255006 cri.go:89] found id: "5eb7bc276d9ede83cf7f9707c5d154ff245634aef28f4966849644db5a50f3a7"
	I1119 22:33:16.211534  255006 cri.go:89] found id: "414de1bfb6aec59dd4d86bf9f2fea33a808f25024113c43be7b1c30c813216b0"
	I1119 22:33:16.211540  255006 cri.go:89] found id: "631bdbae35af1a0fab26aaa35346ef686031049223c980f1c1523d8c16183109"
	I1119 22:33:16.211544  255006 cri.go:89] found id: "60edd37d9535be6816f9e4f45d547b93a9514cd2c28698c56bf7d909151f9696"
	I1119 22:33:16.211549  255006 cri.go:89] found id: "89080922c0159e21e61091b24b9351b5cb28d703c1ed3ad99034c55326191766"
	I1119 22:33:16.211553  255006 cri.go:89] found id: "8f84773f448215b180ee3539cd8a463b1872e20afd8aa7857fae9f872b39a9c0"
	I1119 22:33:16.211556  255006 cri.go:89] found id: "b26645bb067934e3f245a0dc0ee3200d5ec7b936438cf91b80afef3be85e62af"
	I1119 22:33:16.211559  255006 cri.go:89] found id: "b40d5aa13f1581f4d75fa92e103d0cc9932c695d82287850952ad9cce1d98ba5"
	I1119 22:33:16.211568  255006 cri.go:89] found id: "81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a"
	I1119 22:33:16.211572  255006 cri.go:89] found id: "bd5fb08be8644f6048088566e05635f71fa87e5bcbb3288f49cac595a87fda22"
	I1119 22:33:16.211576  255006 cri.go:89] found id: ""
	I1119 22:33:16.211618  255006 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:33:16.224440  255006 retry.go:31] will retry after 322.025448ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:33:16Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:33:16.546977  255006 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:33:16.558988  255006 pause.go:52] kubelet running: false
	I1119 22:33:16.559041  255006 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:33:16.741778  255006 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:33:16.741970  255006 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:33:16.814202  255006 cri.go:89] found id: "73d881498864dc714b5d899ea9180eb714418ab2c2ac07c6c45be10aa174996b"
	I1119 22:33:16.814228  255006 cri.go:89] found id: "5eb7bc276d9ede83cf7f9707c5d154ff245634aef28f4966849644db5a50f3a7"
	I1119 22:33:16.814234  255006 cri.go:89] found id: "414de1bfb6aec59dd4d86bf9f2fea33a808f25024113c43be7b1c30c813216b0"
	I1119 22:33:16.814239  255006 cri.go:89] found id: "631bdbae35af1a0fab26aaa35346ef686031049223c980f1c1523d8c16183109"
	I1119 22:33:16.814242  255006 cri.go:89] found id: "60edd37d9535be6816f9e4f45d547b93a9514cd2c28698c56bf7d909151f9696"
	I1119 22:33:16.814259  255006 cri.go:89] found id: "89080922c0159e21e61091b24b9351b5cb28d703c1ed3ad99034c55326191766"
	I1119 22:33:16.814263  255006 cri.go:89] found id: "8f84773f448215b180ee3539cd8a463b1872e20afd8aa7857fae9f872b39a9c0"
	I1119 22:33:16.814266  255006 cri.go:89] found id: "b26645bb067934e3f245a0dc0ee3200d5ec7b936438cf91b80afef3be85e62af"
	I1119 22:33:16.814270  255006 cri.go:89] found id: "b40d5aa13f1581f4d75fa92e103d0cc9932c695d82287850952ad9cce1d98ba5"
	I1119 22:33:16.814288  255006 cri.go:89] found id: "81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a"
	I1119 22:33:16.814295  255006 cri.go:89] found id: "bd5fb08be8644f6048088566e05635f71fa87e5bcbb3288f49cac595a87fda22"
	I1119 22:33:16.814300  255006 cri.go:89] found id: ""
	I1119 22:33:16.814345  255006 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:33:16.829323  255006 out.go:203] 
	W1119 22:33:16.830552  255006 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:33:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:33:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 22:33:16.830567  255006 out.go:285] * 
	* 
	W1119 22:33:16.835238  255006 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 22:33:16.836446  255006 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-680619 --alsologtostderr -v=1 failed: exit status 80
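The failure mode is visible in the stderr above: minikube's pause path enumerates running containers with `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory", even though the crictl query just before it returned the expected kube-system container IDs. To narrow this down, one could compare what the CRI reports with what actually exists under the OCI runtime state root on the node. The commands below are an illustrative sketch, not part of the test; the profile name comes from this run, and /run/crun is only a guess at an alternative runtime state directory:

	out/minikube-linux-amd64 ssh -p old-k8s-version-680619 "sudo crictl ps --quiet"
	out/minikube-linux-amd64 ssh -p old-k8s-version-680619 "ls -d /run/runc /run/crun || true"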
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-680619
helpers_test.go:243: (dbg) docker inspect old-k8s-version-680619:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919",
	        "Created": "2025-11-19T22:31:10.323294154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243531,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:32:19.395015357Z",
	            "FinishedAt": "2025-11-19T22:32:18.544824931Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919/hostname",
	        "HostsPath": "/var/lib/docker/containers/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919/hosts",
	        "LogPath": "/var/lib/docker/containers/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919-json.log",
	        "Name": "/old-k8s-version-680619",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-680619:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-680619",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919",
	                "LowerDir": "/var/lib/docker/overlay2/4d051ee79cf99bebb3106f63a795d2de9d9c603c6888b47d14e37e908cc4d8ac-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d051ee79cf99bebb3106f63a795d2de9d9c603c6888b47d14e37e908cc4d8ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d051ee79cf99bebb3106f63a795d2de9d9c603c6888b47d14e37e908cc4d8ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d051ee79cf99bebb3106f63a795d2de9d9c603c6888b47d14e37e908cc4d8ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-680619",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-680619/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-680619",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-680619",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-680619",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1356c24c10d3ffe8c88ef299c6d1288cd3d6953d2434bbd08bc1e77831e86e03",
	            "SandboxKey": "/var/run/docker/netns/1356c24c10d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-680619": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7d9a9064074d1b313e6d7afbded8c0b7d9aaeb41b178a1f248c1547e69e77bbc",
	                    "EndpointID": "c7bd25fb5f5055fb189e9c8af21547e98d61a73f7e3d735c8db895bb4fbd59d0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "5a:48:7c:94:d0:0b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-680619",
	                        "08365271d4a4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
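Consistent with the pause aborting before any freeze was attempted, the inspect output above still shows "Running": true and "Paused": false. The same state can be read back in one line (a sketch, using the container name shown above):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-680619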
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680619 -n old-k8s-version-680619
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680619 -n old-k8s-version-680619: exit status 2 (352.186792ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-680619 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-680619 logs -n 25: (1.114377822s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ start   │ -p NoKubernetes-662839 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p missing-upgrade-015670 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-015670    │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:31 UTC │
	│ ssh     │ -p NoKubernetes-662839 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ stop    │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p NoKubernetes-662839 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ stop    │ -p kubernetes-upgrade-801704                                                                                                                                                                                                                  │ kubernetes-upgrade-801704 │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-801704 │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ ssh     │ -p NoKubernetes-662839 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ delete  │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ delete  │ -p missing-upgrade-015670                                                                                                                                                                                                                     │ missing-upgrade-015670    │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-680619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ stop    │ -p old-k8s-version-680619 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680619 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-178067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ stop    │ -p no-preload-178067 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable dashboard -p no-preload-178067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ start   │ -p cert-expiration-855818 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-855818    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ delete  │ -p cert-expiration-855818                                                                                                                                                                                                                     │ cert-expiration-855818    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380        │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ image   │ old-k8s-version-680619 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p old-k8s-version-680619 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:33:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:33:01.497334  252325 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:33:01.497583  252325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:01.497591  252325 out.go:374] Setting ErrFile to fd 2...
	I1119 22:33:01.497595  252325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:01.497760  252325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:33:01.498196  252325 out.go:368] Setting JSON to false
	I1119 22:33:01.499292  252325 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4529,"bootTime":1763587052,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:33:01.499361  252325 start.go:143] virtualization: kvm guest
	I1119 22:33:01.501379  252325 out.go:179] * [embed-certs-443380] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:33:01.504213  252325 notify.go:221] Checking for updates...
	I1119 22:33:01.504227  252325 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:33:01.505367  252325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:33:01.506547  252325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:33:01.507637  252325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:33:01.508761  252325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:33:01.509901  252325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:32:59.120306  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:32:59.120361  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:32:59.120420  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:32:59.146689  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:32:59.146706  229026 cri.go:89] found id: "49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3"
	I1119 22:32:59.146710  229026 cri.go:89] found id: ""
	I1119 22:32:59.146717  229026 logs.go:282] 2 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f 49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3]
	I1119 22:32:59.146767  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:32:59.150639  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:32:59.154354  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:32:59.154408  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:32:59.179641  229026 cri.go:89] found id: ""
	I1119 22:32:59.179658  229026 logs.go:282] 0 containers: []
	W1119 22:32:59.179664  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:32:59.179670  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:32:59.179715  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:32:59.205860  229026 cri.go:89] found id: ""
	I1119 22:32:59.205880  229026 logs.go:282] 0 containers: []
	W1119 22:32:59.205889  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:32:59.205896  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:32:59.205938  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:32:59.232101  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:32:59.232121  229026 cri.go:89] found id: ""
	I1119 22:32:59.232130  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:32:59.232174  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:32:59.235754  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:32:59.235805  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:32:59.259838  229026 cri.go:89] found id: ""
	I1119 22:32:59.259860  229026 logs.go:282] 0 containers: []
	W1119 22:32:59.259867  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:32:59.259876  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:32:59.259912  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:32:59.284492  229026 cri.go:89] found id: "c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:32:59.284508  229026 cri.go:89] found id: ""
	I1119 22:32:59.284515  229026 logs.go:282] 1 containers: [c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a]
	I1119 22:32:59.284552  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:32:59.288183  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:32:59.288236  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:32:59.313254  229026 cri.go:89] found id: ""
	I1119 22:32:59.313272  229026 logs.go:282] 0 containers: []
	W1119 22:32:59.313278  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:32:59.313284  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:32:59.313319  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:32:59.341016  229026 cri.go:89] found id: ""
	I1119 22:32:59.341037  229026 logs.go:282] 0 containers: []
	W1119 22:32:59.341046  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:32:59.341061  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:32:59.341074  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1119 22:33:01.511471  252325 config.go:182] Loaded profile config "kubernetes-upgrade-801704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:01.511581  252325 config.go:182] Loaded profile config "no-preload-178067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:01.511685  252325 config.go:182] Loaded profile config "old-k8s-version-680619": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 22:33:01.511796  252325 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:33:01.535299  252325 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:33:01.535408  252325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:33:01.590414  252325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:33:01.580981289 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:33:01.590510  252325 docker.go:319] overlay module found
	I1119 22:33:01.592754  252325 out.go:179] * Using the docker driver based on user configuration
	I1119 22:33:01.593795  252325 start.go:309] selected driver: docker
	I1119 22:33:01.593807  252325 start.go:930] validating driver "docker" against <nil>
	I1119 22:33:01.593848  252325 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:33:01.594432  252325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:33:01.649608  252325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:33:01.638967992 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:33:01.649791  252325 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:33:01.650061  252325 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:33:01.651601  252325 out.go:179] * Using Docker driver with root privileges
	I1119 22:33:01.652670  252325 cni.go:84] Creating CNI manager for ""
	I1119 22:33:01.652740  252325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:33:01.652755  252325 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:33:01.652829  252325 start.go:353] cluster config:
	{Name:embed-certs-443380 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:33:01.654179  252325 out.go:179] * Starting "embed-certs-443380" primary control-plane node in "embed-certs-443380" cluster
	I1119 22:33:01.655183  252325 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:33:01.656323  252325 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:33:01.657295  252325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:01.657320  252325 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:33:01.657327  252325 cache.go:65] Caching tarball of preloaded images
	I1119 22:33:01.657382  252325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:33:01.657413  252325 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:33:01.657427  252325 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:33:01.657527  252325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/config.json ...
	I1119 22:33:01.657547  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/config.json: {Name:mk4297190b4b8789cd79e77fffa134a382aad579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:01.676904  252325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:33:01.676924  252325 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:33:01.676943  252325 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:33:01.676967  252325 start.go:360] acquireMachinesLock for embed-certs-443380: {Name:mk45876245c2cf21fce38118b7c82861612c5d41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:33:01.677071  252325 start.go:364] duration metric: took 86.075µs to acquireMachinesLock for "embed-certs-443380"
	I1119 22:33:01.677099  252325 start.go:93] Provisioning new machine with config: &{Name:embed-certs-443380 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:33:01.677185  252325 start.go:125] createHost starting for "" (driver="docker")
	W1119 22:33:00.606900  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:02.607345  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:32:59.984769  243333 pod_ready.go:104] pod "coredns-5dd5756b68-7bkvq" is not "Ready", error: <nil>
	I1119 22:33:01.984418  243333 pod_ready.go:94] pod "coredns-5dd5756b68-7bkvq" is "Ready"
	I1119 22:33:01.984439  243333 pod_ready.go:86] duration metric: took 32.504951912s for pod "coredns-5dd5756b68-7bkvq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:01.986903  243333 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:01.990522  243333 pod_ready.go:94] pod "etcd-old-k8s-version-680619" is "Ready"
	I1119 22:33:01.990543  243333 pod_ready.go:86] duration metric: took 3.620785ms for pod "etcd-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:01.993127  243333 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:01.997419  243333 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-680619" is "Ready"
	I1119 22:33:01.997438  243333 pod_ready.go:86] duration metric: took 4.2896ms for pod "kube-apiserver-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:02.000066  243333 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:02.183791  243333 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-680619" is "Ready"
	I1119 22:33:02.183849  243333 pod_ready.go:86] duration metric: took 183.762211ms for pod "kube-controller-manager-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:02.384133  243333 pod_ready.go:83] waiting for pod "kube-proxy-4xxp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:02.783807  243333 pod_ready.go:94] pod "kube-proxy-4xxp4" is "Ready"
	I1119 22:33:02.783841  243333 pod_ready.go:86] duration metric: took 399.678694ms for pod "kube-proxy-4xxp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:02.983668  243333 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:03.383429  243333 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-680619" is "Ready"
	I1119 22:33:03.383457  243333 pod_ready.go:86] duration metric: took 399.75864ms for pod "kube-scheduler-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:03.383471  243333 pod_ready.go:40] duration metric: took 33.907172866s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:33:03.437022  243333 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 22:33:03.438652  243333 out.go:203] 
	W1119 22:33:03.439810  243333 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 22:33:03.440920  243333 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 22:33:03.442028  243333 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-680619" cluster and "default" namespace by default
	I1119 22:33:01.678767  252325 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:33:01.679004  252325 start.go:159] libmachine.API.Create for "embed-certs-443380" (driver="docker")
	I1119 22:33:01.679037  252325 client.go:173] LocalClient.Create starting
	I1119 22:33:01.679097  252325 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem
	I1119 22:33:01.679132  252325 main.go:143] libmachine: Decoding PEM data...
	I1119 22:33:01.679159  252325 main.go:143] libmachine: Parsing certificate...
	I1119 22:33:01.679233  252325 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem
	I1119 22:33:01.679264  252325 main.go:143] libmachine: Decoding PEM data...
	I1119 22:33:01.679277  252325 main.go:143] libmachine: Parsing certificate...
	I1119 22:33:01.679589  252325 cli_runner.go:164] Run: docker network inspect embed-certs-443380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:33:01.695556  252325 cli_runner.go:211] docker network inspect embed-certs-443380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:33:01.695639  252325 network_create.go:284] running [docker network inspect embed-certs-443380] to gather additional debugging logs...
	I1119 22:33:01.695659  252325 cli_runner.go:164] Run: docker network inspect embed-certs-443380
	W1119 22:33:01.711968  252325 cli_runner.go:211] docker network inspect embed-certs-443380 returned with exit code 1
	I1119 22:33:01.711999  252325 network_create.go:287] error running [docker network inspect embed-certs-443380]: docker network inspect embed-certs-443380: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-443380 not found
	I1119 22:33:01.712023  252325 network_create.go:289] output of [docker network inspect embed-certs-443380]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-443380 not found
	
	** /stderr **
	I1119 22:33:01.712112  252325 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:33:01.747083  252325 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cde0f356bd10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b5:fa:ba:e0:a6} reservation:<nil>}
	I1119 22:33:01.747803  252325 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-47fb5ce24a02 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:30:91:0e:d6:d9} reservation:<nil>}
	I1119 22:33:01.748524  252325 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2592199ffac9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:9b:dd:65:07:28} reservation:<nil>}
	I1119 22:33:01.749214  252325 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7d9a9064074d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:c1:e4:50:35:aa} reservation:<nil>}
	I1119 22:33:01.750035  252325 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eee910}
	I1119 22:33:01.750076  252325 network_create.go:124] attempt to create docker network embed-certs-443380 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 22:33:01.750124  252325 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-443380 embed-certs-443380
	I1119 22:33:01.796065  252325 network_create.go:108] docker network embed-certs-443380 192.168.85.0/24 created
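
The subnet and gateway allocated here can be double-checked by hand with the same inspect template the runner uses a few lines above; this is only a manual aid, not a step the test performs:

	docker network inspect embed-certs-443380 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} (gateway {{.Gateway}}){{end}}'
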
	I1119 22:33:01.796091  252325 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-443380" container
	I1119 22:33:01.796140  252325 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:33:01.812987  252325 cli_runner.go:164] Run: docker volume create embed-certs-443380 --label name.minikube.sigs.k8s.io=embed-certs-443380 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:33:01.830803  252325 oci.go:103] Successfully created a docker volume embed-certs-443380
	I1119 22:33:01.830903  252325 cli_runner.go:164] Run: docker run --rm --name embed-certs-443380-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-443380 --entrypoint /usr/bin/test -v embed-certs-443380:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:33:02.226183  252325 oci.go:107] Successfully prepared a docker volume embed-certs-443380
	I1119 22:33:02.226240  252325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:02.226267  252325 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:33:02.226337  252325 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-443380:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	W1119 22:33:04.607874  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:07.108171  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	I1119 22:33:06.674938  252325 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-443380:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.448558667s)
	I1119 22:33:06.674965  252325 kic.go:203] duration metric: took 4.448710608s to extract preloaded images to volume ...
	W1119 22:33:06.675032  252325 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:33:06.675067  252325 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:33:06.675114  252325 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:33:06.730759  252325 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-443380 --name embed-certs-443380 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-443380 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-443380 --network embed-certs-443380 --ip 192.168.85.2 --volume embed-certs-443380:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
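
The `--publish=127.0.0.1::22` style flags above let Docker pick the host-side ports; the 127.0.0.1:33073 SSH endpoint used further down is one of those assignments. A quick way to list them by hand (not something the run itself does):

	docker port embed-certs-443380
	# e.g. 22/tcp -> 127.0.0.1:33073, 8443/tcp -> 127.0.0.1:<assigned>
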
	I1119 22:33:07.025074  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Running}}
	I1119 22:33:07.043685  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:07.061043  252325 cli_runner.go:164] Run: docker exec embed-certs-443380 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:33:07.107128  252325 oci.go:144] the created container "embed-certs-443380" has a running status.
	I1119 22:33:07.107156  252325 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa...
	I1119 22:33:07.271265  252325 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:33:07.297185  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:07.319017  252325 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:33:07.319041  252325 kic_runner.go:114] Args: [docker exec --privileged embed-certs-443380 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:33:07.369406  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:07.390718  252325 machine.go:94] provisionDockerMachine start ...
	I1119 22:33:07.390847  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:07.410701  252325 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:07.411073  252325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1119 22:33:07.411096  252325 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:33:07.536529  252325 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-443380
	
	I1119 22:33:07.536561  252325 ubuntu.go:182] provisioning hostname "embed-certs-443380"
	I1119 22:33:07.536611  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:07.555612  252325 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:07.555911  252325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1119 22:33:07.555936  252325 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-443380 && echo "embed-certs-443380" | sudo tee /etc/hostname
	I1119 22:33:07.692510  252325 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-443380
	
	I1119 22:33:07.692623  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:07.710692  252325 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:07.710918  252325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1119 22:33:07.710936  252325 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-443380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-443380/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-443380' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:33:07.833957  252325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:33:07.833985  252325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 22:33:07.834025  252325 ubuntu.go:190] setting up certificates
	I1119 22:33:07.834039  252325 provision.go:84] configureAuth start
	I1119 22:33:07.834092  252325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-443380
	I1119 22:33:07.853762  252325 provision.go:143] copyHostCerts
	I1119 22:33:07.853838  252325 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem, removing ...
	I1119 22:33:07.853853  252325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem
	I1119 22:33:07.853932  252325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 22:33:07.854052  252325 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem, removing ...
	I1119 22:33:07.854064  252325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem
	I1119 22:33:07.854102  252325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 22:33:07.854199  252325 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem, removing ...
	I1119 22:33:07.854210  252325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem
	I1119 22:33:07.854249  252325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 22:33:07.854322  252325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.embed-certs-443380 san=[127.0.0.1 192.168.85.2 embed-certs-443380 localhost minikube]
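
The SANs baked into the freshly generated server certificate can be spot-checked with openssl; the path and the expected names come from the provision.go line above, and this is only a debugging aid, not part of the run:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expect the SANs listed above: embed-certs-443380, localhost, minikube, 127.0.0.1, 192.168.85.2
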
	I1119 22:33:07.974478  252325 provision.go:177] copyRemoteCerts
	I1119 22:33:07.974531  252325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:33:07.974564  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:07.991932  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:08.083359  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:33:08.101926  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:33:08.119318  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:33:08.135640  252325 provision.go:87] duration metric: took 301.58365ms to configureAuth
	I1119 22:33:08.135664  252325 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:33:08.135807  252325 config.go:182] Loaded profile config "embed-certs-443380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:08.135918  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:08.153751  252325 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:08.153984  252325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1119 22:33:08.154006  252325 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:33:08.414470  252325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:33:08.414501  252325 machine.go:97] duration metric: took 1.023758944s to provisionDockerMachine
	I1119 22:33:08.414515  252325 client.go:176] duration metric: took 6.735469813s to LocalClient.Create
	I1119 22:33:08.414535  252325 start.go:167] duration metric: took 6.735531465s to libmachine.API.Create "embed-certs-443380"
	I1119 22:33:08.414546  252325 start.go:293] postStartSetup for "embed-certs-443380" (driver="docker")
	I1119 22:33:08.414564  252325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:33:08.414662  252325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:33:08.414715  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:08.431992  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:08.524868  252325 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:33:08.528237  252325 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:33:08.528259  252325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:33:08.528269  252325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 22:33:08.528312  252325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 22:33:08.528387  252325 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem -> 128292.pem in /etc/ssl/certs
	I1119 22:33:08.528503  252325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:33:08.535937  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:33:08.554743  252325 start.go:296] duration metric: took 140.180844ms for postStartSetup
	I1119 22:33:08.555113  252325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-443380
	I1119 22:33:08.572861  252325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/config.json ...
	I1119 22:33:08.573077  252325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:33:08.573118  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:08.590191  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:08.680487  252325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:33:08.685102  252325 start.go:128] duration metric: took 7.007901367s to createHost
	I1119 22:33:08.685126  252325 start.go:83] releasing machines lock for "embed-certs-443380", held for 7.008041998s
	I1119 22:33:08.685186  252325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-443380
	I1119 22:33:08.703503  252325 ssh_runner.go:195] Run: cat /version.json
	I1119 22:33:08.703549  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:08.703592  252325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:33:08.703666  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:08.721011  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:08.721943  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:08.881942  252325 ssh_runner.go:195] Run: systemctl --version
	I1119 22:33:08.888240  252325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:33:08.922369  252325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:33:08.926928  252325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:33:08.926999  252325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:33:08.951143  252325 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:33:08.951165  252325 start.go:496] detecting cgroup driver to use...
	I1119 22:33:08.951195  252325 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:33:08.951235  252325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:33:08.966336  252325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:33:08.977800  252325 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:33:08.977875  252325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:33:08.993074  252325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:33:09.010878  252325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:33:09.089917  252325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:33:09.173998  252325 docker.go:234] disabling docker service ...
	I1119 22:33:09.174060  252325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:33:09.191939  252325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:33:09.203668  252325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:33:09.286446  252325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:33:09.370884  252325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:33:09.382304  252325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:33:09.395806  252325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:33:09.395869  252325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.405703  252325 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 22:33:09.405765  252325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.414903  252325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.423682  252325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.432014  252325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:33:09.440550  252325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.450361  252325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.463183  252325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.472281  252325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:33:09.479192  252325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:33:09.486205  252325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:09.572673  252325 ssh_runner.go:195] Run: sudo systemctl restart crio
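
The sed edits above boil down to a few values inside /etc/crio/crio.conf.d/02-crio.conf. As a rough sketch of the end state only (the drop-in filename and the exact [crio.image]/[crio.runtime] section placement are assumptions; minikube patches the existing file in place rather than writing a new one):

	# hypothetical drop-in collecting the values the log configures via sed
	sudo tee /etc/crio/crio.conf.d/99-minikube-sketch.conf >/dev/null <<'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio
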
	I1119 22:33:09.712065  252325 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:33:09.712122  252325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:33:09.715745  252325 start.go:564] Will wait 60s for crictl version
	I1119 22:33:09.715788  252325 ssh_runner.go:195] Run: which crictl
	I1119 22:33:09.719102  252325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:33:09.743243  252325 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:33:09.743315  252325 ssh_runner.go:195] Run: crio --version
	I1119 22:33:09.771239  252325 ssh_runner.go:195] Run: crio --version
	I1119 22:33:09.798320  252325 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:33:09.799411  252325 cli_runner.go:164] Run: docker network inspect embed-certs-443380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:33:09.818975  252325 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:33:09.822938  252325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:33:09.833252  252325 kubeadm.go:884] updating cluster {Name:embed-certs-443380 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:33:09.833383  252325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:09.833437  252325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:33:09.864567  252325 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:33:09.864586  252325 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:33:09.864627  252325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:33:09.888156  252325 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:33:09.888172  252325 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:33:09.888179  252325 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 22:33:09.888264  252325 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-443380 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:33:09.888348  252325 ssh_runner.go:195] Run: crio config
	I1119 22:33:09.931564  252325 cni.go:84] Creating CNI manager for ""
	I1119 22:33:09.931589  252325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:33:09.931609  252325 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:33:09.931634  252325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-443380 NodeName:embed-certs-443380 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:33:09.931800  252325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-443380"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:33:09.931876  252325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:33:09.939707  252325 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:33:09.939772  252325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:33:09.947057  252325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 22:33:09.958831  252325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:33:09.972980  252325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
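(Editor's note: the multi-document kubeadm config rendered above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new here. A quick, purely illustrative way to sanity-check that such a file parses is to decode it document by document; this sketch assumes gopkg.in/yaml.v3 is available and is not part of minikube or the test suite.)

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumption: any multi-document YAML decoder works here
)

// Minimal sketch: walk a multi-document kubeadm config (like the one above)
// and report each document's apiVersion/kind.
func main() {
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}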
	I1119 22:33:09.984521  252325 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:33:09.987795  252325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
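(Editor's note: the bash one-liner above is an idempotent /etc/hosts update: drop any existing control-plane.minikube.internal entry, then append the current mapping. A minimal sketch of the same logic, writing to a copy rather than /etc/hosts, assuming the host alias and IP from the log:)

package main

import (
	"fmt"
	"os"
	"strings"
)

// Sketch of the idempotent hosts-file update shown above. Illustrative only;
// it writes the result to hosts.new instead of replacing /etc/hosts.
func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.85.2"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale control-plane entry, as grep -v does above
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	if err := os.WriteFile("hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}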
	I1119 22:33:09.996810  252325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:10.072984  252325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:33:10.095866  252325 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380 for IP: 192.168.85.2
	I1119 22:33:10.095886  252325 certs.go:195] generating shared ca certs ...
	I1119 22:33:10.095901  252325 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.096020  252325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 22:33:10.096071  252325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 22:33:10.096083  252325 certs.go:257] generating profile certs ...
	I1119 22:33:10.096132  252325 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.key
	I1119 22:33:10.096149  252325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.crt with IP's: []
	I1119 22:33:10.283329  252325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.crt ...
	I1119 22:33:10.283353  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.crt: {Name:mkbf7ad9fcf142ca89ca73eee96635beed02dbb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.283521  252325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.key ...
	I1119 22:33:10.283539  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.key: {Name:mk187744faba3bdf35a617d99549b829a4312db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.283621  252325 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key.8b1e4b78
	I1119 22:33:10.283635  252325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt.8b1e4b78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 22:33:10.617647  252325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt.8b1e4b78 ...
	I1119 22:33:10.617670  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt.8b1e4b78: {Name:mk6a8b92fbdf38f1d80b191920927bfc89cab752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.617810  252325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key.8b1e4b78 ...
	I1119 22:33:10.617831  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key.8b1e4b78: {Name:mk3e1c4c577d13edada9e089fe5ea5d95f8f8e71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.617903  252325 certs.go:382] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt.8b1e4b78 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt
	I1119 22:33:10.617990  252325 certs.go:386] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key.8b1e4b78 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key
	I1119 22:33:10.618051  252325 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.key
	I1119 22:33:10.618066  252325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.crt with IP's: []
	I1119 22:33:10.666118  252325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.crt ...
	I1119 22:33:10.666141  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.crt: {Name:mk6f829de03d54e483844ba54310472124343694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.666274  252325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.key ...
	I1119 22:33:10.666286  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.key: {Name:mk979cb89f953ae262da4eba61240efab10eb0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
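(Editor's note: the certs.go/crypto.go lines above generate profile certificates signed by the shared minikubeCA. A rough standard-library sketch of that step is below; key sizes, subjects, and output paths are assumptions, and minikube's real implementation in its crypto helpers differs in detail and error handling.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// Sketch: create a CA, then a leaf cert signed by it with the apiserver SANs
// seen in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2).
// Error handling is reduced for brevity.
func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

	// Write the signed leaf certificate as PEM (apiserver.crt in the log).
	out, _ := os.Create("apiserver.crt")
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}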
	I1119 22:33:10.666454  252325 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem (1338 bytes)
	W1119 22:33:10.666486  252325 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829_empty.pem, impossibly tiny 0 bytes
	I1119 22:33:10.666508  252325 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:33:10.666534  252325 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:33:10.666555  252325 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:33:10.666575  252325 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 22:33:10.666612  252325 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:33:10.667149  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:33:10.685354  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:33:10.701745  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:33:10.717586  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:33:10.733709  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 22:33:10.750064  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:33:10.766215  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:33:10.783123  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:33:10.800167  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem --> /usr/share/ca-certificates/12829.pem (1338 bytes)
	I1119 22:33:10.820851  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /usr/share/ca-certificates/128292.pem (1708 bytes)
	I1119 22:33:10.837560  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:33:10.853839  252325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:33:10.865865  252325 ssh_runner.go:195] Run: openssl version
	I1119 22:33:10.871617  252325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12829.pem && ln -fs /usr/share/ca-certificates/12829.pem /etc/ssl/certs/12829.pem"
	I1119 22:33:10.879308  252325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12829.pem
	I1119 22:33:10.882566  252325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12829.pem
	I1119 22:33:10.882615  252325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12829.pem
	I1119 22:33:10.916836  252325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12829.pem /etc/ssl/certs/51391683.0"
	I1119 22:33:10.924469  252325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128292.pem && ln -fs /usr/share/ca-certificates/128292.pem /etc/ssl/certs/128292.pem"
	I1119 22:33:10.932228  252325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128292.pem
	I1119 22:33:10.935677  252325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128292.pem
	I1119 22:33:10.935714  252325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128292.pem
	I1119 22:33:10.971166  252325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128292.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:33:10.978907  252325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:33:10.986645  252325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:10.990018  252325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:10.990059  252325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:11.023660  252325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
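(Editor's note: the openssl/ln sequence above builds the hashed-symlink layout OpenSSL uses to locate trusted CAs: compute the subject hash of each PEM and link /etc/ssl/certs/<hash>.0 to it. A small wrapper in the same spirit, shelling out to openssl exactly as the test does, might look like this; the path is taken from the log and error handling is minimal.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Sketch of the hash-and-symlink step logged above.
func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as in the log

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Mirror `test -L ... || ln -fs ...`: only create the link if it is missing.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}
}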
	I1119 22:33:11.031967  252325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:33:11.035349  252325 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:33:11.035406  252325 kubeadm.go:401] StartCluster: {Name:embed-certs-443380 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:33:11.035484  252325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:33:11.035538  252325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:33:11.061851  252325 cri.go:89] found id: ""
	I1119 22:33:11.061906  252325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:33:11.069375  252325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:33:11.076947  252325 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:33:11.076988  252325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:33:11.084427  252325 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:33:11.084443  252325 kubeadm.go:158] found existing configuration files:
	
	I1119 22:33:11.084483  252325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:33:11.091939  252325 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:33:11.091982  252325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:33:11.099484  252325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:33:11.107042  252325 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:33:11.107089  252325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:33:11.113840  252325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:33:11.120960  252325 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:33:11.121002  252325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:33:11.127563  252325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:33:11.134481  252325 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:33:11.134519  252325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:33:11.141082  252325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:33:11.179250  252325 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:33:11.179314  252325 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:33:11.199591  252325 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:33:11.199695  252325 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:33:11.199759  252325 kubeadm.go:319] OS: Linux
	I1119 22:33:11.199809  252325 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:33:11.199884  252325 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:33:11.199982  252325 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:33:11.200064  252325 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:33:11.200146  252325 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:33:11.200187  252325 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:33:11.200237  252325 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:33:11.200278  252325 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:33:11.255487  252325 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:33:11.255632  252325 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:33:11.255770  252325 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:33:11.263031  252325 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:33:11.265108  252325 out.go:252]   - Generating certificates and keys ...
	I1119 22:33:11.265194  252325 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:33:11.265276  252325 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:33:09.395630  229026 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.054534132s)
	W1119 22:33:09.395669  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1119 22:33:09.395678  229026 logs.go:123] Gathering logs for kube-apiserver [49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3] ...
	I1119 22:33:09.395691  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3"
	I1119 22:33:09.426003  229026 logs.go:123] Gathering logs for kube-controller-manager [c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a] ...
	I1119 22:33:09.426025  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:09.451373  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:09.451393  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:09.494706  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:09.494728  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:09.530478  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:09.530513  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:09.561557  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:09.561584  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:09.608153  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:09.608189  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:09.687781  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:09.687824  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
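(Editor's note: the "Gathering logs for ..." pass above runs a fixed set of diagnostic commands (journalctl for kubelet and CRI-O, crictl logs, dmesg, kubectl describe nodes) and keeps going even when individual commands fail. A generic, illustrative collector in that spirit, not minikube's logs.go, is sketched below; the command list is copied from the log and would normally need sudo.)

package main

import (
	"fmt"
	"os/exec"
)

// Sketch: run each diagnostic command, print its combined output, and
// continue past failures rather than aborting the whole collection.
func main() {
	cmds := [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"journalctl", "-u", "crio", "-n", "400"},
		{"crictl", "ps", "-a"},
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		fmt.Printf("==> %v <==\n%s\n", c, out)
		if err != nil {
			fmt.Printf("(command failed: %v)\n", err)
		}
	}
}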
	W1119 22:33:09.607025  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:11.607513  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	I1119 22:33:11.514090  252325 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:33:12.150795  252325 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:33:12.437161  252325 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:33:12.523242  252325 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:33:12.591260  252325 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:33:12.591495  252325 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-443380 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:33:12.659650  252325 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:33:12.659845  252325 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-443380 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:33:12.829884  252325 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:33:13.186082  252325 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:33:13.413475  252325 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:33:13.413714  252325 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:33:13.448006  252325 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:33:13.575106  252325 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:33:13.891287  252325 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:33:14.006907  252325 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:33:14.451405  252325 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:33:14.451998  252325 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:33:14.456348  252325 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:33:14.458346  252325 out.go:252]   - Booting up control plane ...
	I1119 22:33:14.458469  252325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:33:14.458576  252325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:33:14.459419  252325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:33:14.486797  252325 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:33:14.486991  252325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:33:14.493486  252325 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:33:14.493708  252325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:33:14.493776  252325 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:33:14.595843  252325 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:33:14.595995  252325 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:33:15.097121  252325 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.37281ms
	I1119 22:33:15.100255  252325 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:33:15.100358  252325 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 22:33:15.100463  252325 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:33:15.100563  252325 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
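(Editor's note: the kubelet-check and control-plane-check steps above amount to polling well-known health endpoints (kubelet on http://127.0.0.1:10248/healthz, the apiserver on https://192.168.85.2:8443/livez, and so on) until they answer 200 or a deadline passes. A minimal poller in that spirit; the endpoint, interval, and timeout are examples, not kubeadm's exact values.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// During bootstrap the apiserver endpoint uses a not-yet-trusted cert,
		// so verification is skipped in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}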
	I1119 22:33:12.202183  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:12.928507  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": read tcp 192.168.94.1:38244->192.168.94.2:8443: read: connection reset by peer
	I1119 22:33:12.928582  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:12.928646  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:12.956386  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:12.956405  229026 cri.go:89] found id: "49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3"
	I1119 22:33:12.956409  229026 cri.go:89] found id: ""
	I1119 22:33:12.956416  229026 logs.go:282] 2 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f 49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3]
	I1119 22:33:12.956464  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:12.960303  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:12.963926  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:12.963980  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:12.990072  229026 cri.go:89] found id: ""
	I1119 22:33:12.990099  229026 logs.go:282] 0 containers: []
	W1119 22:33:12.990107  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:12.990114  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:12.990172  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:13.014444  229026 cri.go:89] found id: ""
	I1119 22:33:13.014466  229026 logs.go:282] 0 containers: []
	W1119 22:33:13.014476  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:13.014483  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:13.014524  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:13.042799  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:13.042832  229026 cri.go:89] found id: ""
	I1119 22:33:13.042843  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:13.042892  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:13.047157  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:13.047219  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:13.077530  229026 cri.go:89] found id: ""
	I1119 22:33:13.077554  229026 logs.go:282] 0 containers: []
	W1119 22:33:13.077563  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:13.077570  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:13.077628  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:13.104431  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:13.104454  229026 cri.go:89] found id: "c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:13.104460  229026 cri.go:89] found id: ""
	I1119 22:33:13.104469  229026 logs.go:282] 2 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a]
	I1119 22:33:13.104520  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:13.108896  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:13.112444  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:13.112496  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:13.137880  229026 cri.go:89] found id: ""
	I1119 22:33:13.137898  229026 logs.go:282] 0 containers: []
	W1119 22:33:13.137905  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:13.137912  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:13.137958  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:13.164525  229026 cri.go:89] found id: ""
	I1119 22:33:13.164547  229026 logs.go:282] 0 containers: []
	W1119 22:33:13.164557  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:13.164573  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:13.164585  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:13.206975  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:13.207002  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:13.235928  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:13.235949  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:13.289992  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:13.290010  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:13.290037  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:13.321096  229026 logs.go:123] Gathering logs for kube-apiserver [49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3] ...
	I1119 22:33:13.321120  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3"
	I1119 22:33:13.352184  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:13.352209  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:13.397767  229026 logs.go:123] Gathering logs for kube-controller-manager [c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a] ...
	I1119 22:33:13.397792  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:13.422753  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:13.422777  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:13.502340  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:13.502369  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:13.516857  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:13.516884  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:16.043246  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:16.043702  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:16.043773  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:16.043844  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:16.078907  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:16.078931  229026 cri.go:89] found id: ""
	I1119 22:33:16.078942  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:16.078995  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:16.083927  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:16.083983  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:16.117131  229026 cri.go:89] found id: ""
	I1119 22:33:16.117154  229026 logs.go:282] 0 containers: []
	W1119 22:33:16.117164  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:16.117171  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:16.117237  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:16.150229  229026 cri.go:89] found id: ""
	I1119 22:33:16.150254  229026 logs.go:282] 0 containers: []
	W1119 22:33:16.150264  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:16.150272  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:16.150332  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:16.179213  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:16.179309  229026 cri.go:89] found id: ""
	I1119 22:33:16.179320  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:16.179377  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:16.184121  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:16.184179  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:16.214359  229026 cri.go:89] found id: ""
	I1119 22:33:16.214411  229026 logs.go:282] 0 containers: []
	W1119 22:33:16.214425  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:16.214433  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:16.214481  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:16.241647  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:16.241671  229026 cri.go:89] found id: "c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:16.241676  229026 cri.go:89] found id: ""
	I1119 22:33:16.241685  229026 logs.go:282] 2 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a]
	I1119 22:33:16.241732  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:16.245495  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:16.249060  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:16.249112  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:16.275319  229026 cri.go:89] found id: ""
	I1119 22:33:16.275343  229026 logs.go:282] 0 containers: []
	W1119 22:33:16.275352  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:16.275360  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:16.275412  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:16.302515  229026 cri.go:89] found id: ""
	I1119 22:33:16.302536  229026 logs.go:282] 0 containers: []
	W1119 22:33:16.302546  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:16.302561  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:16.302576  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:16.360426  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:16.360447  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:16.360461  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:16.393052  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:16.393077  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:16.439523  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:16.439547  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:16.466536  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:16.466558  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:16.496269  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:16.496294  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	
	
	==> CRI-O <==
	Nov 19 22:32:46 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:46.905274359Z" level=info msg="Created container bd5fb08be8644f6048088566e05635f71fa87e5bcbb3288f49cac595a87fda22: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gv4nv/kubernetes-dashboard" id=fd20536a-bbdc-41f6-b2ef-0a00b4d16f6e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:32:46 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:46.905779366Z" level=info msg="Starting container: bd5fb08be8644f6048088566e05635f71fa87e5bcbb3288f49cac595a87fda22" id=3d8989b6-2c87-4973-a07c-21f023c5e493 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:32:46 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:46.907523072Z" level=info msg="Started container" PID=1757 containerID=bd5fb08be8644f6048088566e05635f71fa87e5bcbb3288f49cac595a87fda22 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gv4nv/kubernetes-dashboard id=3d8989b6-2c87-4973-a07c-21f023c5e493 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fce467e2331d0ba19f583d23ddcaddb494c73e31bb071e307004f956f244e8f0
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.442176966Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a385d1e3-4acd-44d7-8687-22acd3a05d7a name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.443002524Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a941a876-dcc6-4001-9386-d0665ac5a57d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.443993603Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=efe0841f-6834-4b62-bf55-c76d93e75ca9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.444103323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.448271828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.448440588Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/19e5e30eccdf59e32488686de5ffa10c2f74cc958c2de7b46e978ebbceee2c2d/merged/etc/passwd: no such file or directory"
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.448471027Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/19e5e30eccdf59e32488686de5ffa10c2f74cc958c2de7b46e978ebbceee2c2d/merged/etc/group: no such file or directory"
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.448722025Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.475210138Z" level=info msg="Created container 73d881498864dc714b5d899ea9180eb714418ab2c2ac07c6c45be10aa174996b: kube-system/storage-provisioner/storage-provisioner" id=efe0841f-6834-4b62-bf55-c76d93e75ca9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.47564428Z" level=info msg="Starting container: 73d881498864dc714b5d899ea9180eb714418ab2c2ac07c6c45be10aa174996b" id=528f98b4-df58-4bf0-902c-d55a1a722d50 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.477375821Z" level=info msg="Started container" PID=1780 containerID=73d881498864dc714b5d899ea9180eb714418ab2c2ac07c6c45be10aa174996b description=kube-system/storage-provisioner/storage-provisioner id=528f98b4-df58-4bf0-902c-d55a1a722d50 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3605f662dd55f24b8446f72ed24f2a97edc0868fa4f35f3bc26074e8fd22b37
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.348126131Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4bb14b9c-fe17-4a05-a337-04ee6b52f029 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.349150787Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=87485864-9c19-4bb4-980d-58dafa2bf71c name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.350274669Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6/dashboard-metrics-scraper" id=7b9a80df-bc5f-4468-8f64-fd82c49d9e3a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.350407024Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.357714761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.358406903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.390133516Z" level=info msg="Created container 81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6/dashboard-metrics-scraper" id=7b9a80df-bc5f-4468-8f64-fd82c49d9e3a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.39080061Z" level=info msg="Starting container: 81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a" id=8d5ba1c9-f232-4d39-8e21-668a9836c665 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.392801137Z" level=info msg="Started container" PID=1799 containerID=81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6/dashboard-metrics-scraper id=8d5ba1c9-f232-4d39-8e21-668a9836c665 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e56d3bb7a822f016b06026f48d31d7f7539cf62479007fe218cee1adad807a5
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.453810377Z" level=info msg="Removing container: 452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1" id=cd664628-8c59-4e66-8023-05d9af26486e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.464789358Z" level=info msg="Removed container 452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6/dashboard-metrics-scraper" id=cd664628-8c59-4e66-8023-05d9af26486e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	81d3c1d628f85       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   8e56d3bb7a822       dashboard-metrics-scraper-5f989dc9cf-qbkv6       kubernetes-dashboard
	73d881498864d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   a3605f662dd55       storage-provisioner                              kube-system
	bd5fb08be8644       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   31 seconds ago      Running             kubernetes-dashboard        0                   fce467e2331d0       kubernetes-dashboard-8694d4445c-gv4nv            kubernetes-dashboard
	c95f5a9eb7aae       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   68771cbb95217       busybox                                          default
	5eb7bc276d9ed       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           49 seconds ago      Running             coredns                     0                   2bb9164e8eb3d       coredns-5dd5756b68-7bkvq                         kube-system
	414de1bfb6aec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   a3605f662dd55       storage-provisioner                              kube-system
	631bdbae35af1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   b1fd22c19999b       kindnet-mf7gh                                    kube-system
	60edd37d9535b       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           49 seconds ago      Running             kube-proxy                  0                   ae4dc1e276392       kube-proxy-4xxp4                                 kube-system
	89080922c0159       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           52 seconds ago      Running             kube-apiserver              0                   a4eca8069d7ce       kube-apiserver-old-k8s-version-680619            kube-system
	8f84773f44821       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           52 seconds ago      Running             kube-controller-manager     0                   7102a0927a307       kube-controller-manager-old-k8s-version-680619   kube-system
	b26645bb06793       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           52 seconds ago      Running             kube-scheduler              0                   9b4fa65b6a4c9       kube-scheduler-old-k8s-version-680619            kube-system
	b40d5aa13f158       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           52 seconds ago      Running             etcd                        0                   5f130fbaf2759       etcd-old-k8s-version-680619                      kube-system
	
	
	==> coredns [5eb7bc276d9ede83cf7f9707c5d154ff245634aef28f4966849644db5a50f3a7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53410 - 7682 "HINFO IN 4240874763406610744.8842321771112109544. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048473784s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-680619
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-680619
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=old-k8s-version-680619
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_31_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:31:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-680619
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:33:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:32:59 +0000   Wed, 19 Nov 2025 22:31:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:32:59 +0000   Wed, 19 Nov 2025 22:31:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:32:59 +0000   Wed, 19 Nov 2025 22:31:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:32:59 +0000   Wed, 19 Nov 2025 22:31:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-680619
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                58ea2120-251a-483f-9bb0-1cfccac1ceba
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-5dd5756b68-7bkvq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-old-k8s-version-680619                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-mf7gh                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-old-k8s-version-680619             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-old-k8s-version-680619    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-4xxp4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-old-k8s-version-680619             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-qbkv6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-gv4nv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 100s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node old-k8s-version-680619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node old-k8s-version-680619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node old-k8s-version-680619 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           102s               node-controller  Node old-k8s-version-680619 event: Registered Node old-k8s-version-680619 in Controller
	  Normal  NodeReady                88s                kubelet          Node old-k8s-version-680619 status is now: NodeReady
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)  kubelet          Node old-k8s-version-680619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet          Node old-k8s-version-680619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)  kubelet          Node old-k8s-version-680619 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node old-k8s-version-680619 event: Registered Node old-k8s-version-680619 in Controller
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [b40d5aa13f1581f4d75fa92e103d0cc9932c695d82287850952ad9cce1d98ba5] <==
	{"level":"info","ts":"2025-11-19T22:32:25.903968Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T22:32:25.903982Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T22:32:25.904157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-19T22:32:25.904237Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-19T22:32:25.904341Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:32:25.904382Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:32:25.906846Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-19T22:32:25.906901Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:32:25.90807Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:32:25.908355Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T22:32:25.908762Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T22:32:27.095574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-19T22:32:27.09562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-19T22:32:27.095648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T22:32:27.095662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-19T22:32:27.095667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-19T22:32:27.095674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-19T22:32:27.095685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-19T22:32:27.096685Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-680619 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T22:32:27.096697Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:32:27.096725Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:32:27.096902Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T22:32:27.09693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T22:32:27.097987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-19T22:32:27.097988Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:33:18 up  1:15,  0 user,  load average: 2.64, 2.73, 1.80
	Linux old-k8s-version-680619 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [631bdbae35af1a0fab26aaa35346ef686031049223c980f1c1523d8c16183109] <==
	I1119 22:32:28.872579       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:32:28.872790       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:32:28.872939       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:32:28.872955       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:32:28.872979       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:32:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:32:29.076257       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:32:29.076302       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:32:29.076317       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:32:29.076471       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:32:29.567956       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:32:29.567979       1 metrics.go:72] Registering metrics
	I1119 22:32:29.568054       1 controller.go:711] "Syncing nftables rules"
	I1119 22:32:39.081934       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:32:39.081981       1 main.go:301] handling current node
	I1119 22:32:49.076397       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:32:49.076436       1 main.go:301] handling current node
	I1119 22:32:59.076294       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:32:59.076324       1 main.go:301] handling current node
	I1119 22:33:09.078781       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:33:09.078832       1 main.go:301] handling current node
	
	
	==> kube-apiserver [89080922c0159e21e61091b24b9351b5cb28d703c1ed3ad99034c55326191766] <==
	I1119 22:32:28.027440       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:32:28.047723       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 22:32:28.092945       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1119 22:32:28.093024       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1119 22:32:28.093043       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1119 22:32:28.093144       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 22:32:28.093165       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 22:32:28.093210       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 22:32:28.093244       1 aggregator.go:166] initial CRD sync complete...
	I1119 22:32:28.093253       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 22:32:28.093258       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:32:28.093265       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:32:28.093357       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1119 22:32:28.098941       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 22:32:28.836727       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 22:32:28.862316       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 22:32:28.877332       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:32:28.883434       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:32:28.890988       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 22:32:28.925972       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.86.19"}
	I1119 22:32:28.937849       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.200.168"}
	I1119 22:32:28.998294       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:32:40.257102       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 22:32:40.279625       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:32:40.287414       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8f84773f448215b180ee3539cd8a463b1872e20afd8aa7857fae9f872b39a9c0] <==
	I1119 22:32:40.282686       1 shared_informer.go:318] Caches are synced for cronjob
	I1119 22:32:40.292014       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1119 22:32:40.292299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.828496ms"
	I1119 22:32:40.292394       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.682µs"
	I1119 22:32:40.294633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.415385ms"
	I1119 22:32:40.294730       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.524µs"
	I1119 22:32:40.298619       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.896µs"
	I1119 22:32:40.345084       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1119 22:32:40.347325       1 shared_informer.go:318] Caches are synced for job
	I1119 22:32:40.359612       1 shared_informer.go:318] Caches are synced for persistent volume
	I1119 22:32:40.374226       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 22:32:40.410721       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 22:32:40.463422       1 shared_informer.go:318] Caches are synced for HPA
	I1119 22:32:40.777128       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:32:40.777159       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 22:32:40.784273       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:32:43.409720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.297µs"
	I1119 22:32:44.414785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="140.986µs"
	I1119 22:32:45.417946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="129.857µs"
	I1119 22:32:47.431019       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.404981ms"
	I1119 22:32:47.431101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="36.843µs"
	I1119 22:33:01.732121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.535111ms"
	I1119 22:33:01.732274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.258µs"
	I1119 22:33:02.463115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.275µs"
	I1119 22:33:10.588094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.612µs"
	
	
	==> kube-proxy [60edd37d9535be6816f9e4f45d547b93a9514cd2c28698c56bf7d909151f9696] <==
	I1119 22:32:28.743853       1 server_others.go:69] "Using iptables proxy"
	I1119 22:32:28.752754       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1119 22:32:28.771359       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:32:28.773633       1 server_others.go:152] "Using iptables Proxier"
	I1119 22:32:28.773668       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 22:32:28.773677       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 22:32:28.773710       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 22:32:28.774068       1 server.go:846] "Version info" version="v1.28.0"
	I1119 22:32:28.774088       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:32:28.775514       1 config.go:188] "Starting service config controller"
	I1119 22:32:28.776398       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 22:32:28.775623       1 config.go:315] "Starting node config controller"
	I1119 22:32:28.776497       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 22:32:28.776100       1 config.go:97] "Starting endpoint slice config controller"
	I1119 22:32:28.776545       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 22:32:28.877358       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 22:32:28.877393       1 shared_informer.go:318] Caches are synced for service config
	I1119 22:32:28.877411       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b26645bb067934e3f245a0dc0ee3200d5ec7b936438cf91b80afef3be85e62af] <==
	I1119 22:32:26.256886       1 serving.go:348] Generated self-signed cert in-memory
	I1119 22:32:28.046318       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1119 22:32:28.046339       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:32:28.050087       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1119 22:32:28.050112       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1119 22:32:28.050120       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:32:28.050133       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1119 22:32:28.050161       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:32:28.050184       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1119 22:32:28.051037       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1119 22:32:28.051345       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1119 22:32:28.150931       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1119 22:32:28.150944       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1119 22:32:28.150931       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 22:32:40 old-k8s-version-680619 kubelet[742]: I1119 22:32:40.387036     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/742e4e38-0bcd-405e-8b42-aa37e875d6b6-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-gv4nv\" (UID: \"742e4e38-0bcd-405e-8b42-aa37e875d6b6\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gv4nv"
	Nov 19 22:32:40 old-k8s-version-680619 kubelet[742]: I1119 22:32:40.387096     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jq9n\" (UniqueName: \"kubernetes.io/projected/742e4e38-0bcd-405e-8b42-aa37e875d6b6-kube-api-access-5jq9n\") pod \"kubernetes-dashboard-8694d4445c-gv4nv\" (UID: \"742e4e38-0bcd-405e-8b42-aa37e875d6b6\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gv4nv"
	Nov 19 22:32:40 old-k8s-version-680619 kubelet[742]: I1119 22:32:40.387183     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5pht\" (UniqueName: \"kubernetes.io/projected/98c7d01b-bf85-4e98-b193-c023a7d173da-kube-api-access-t5pht\") pod \"dashboard-metrics-scraper-5f989dc9cf-qbkv6\" (UID: \"98c7d01b-bf85-4e98-b193-c023a7d173da\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6"
	Nov 19 22:32:40 old-k8s-version-680619 kubelet[742]: I1119 22:32:40.387237     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/98c7d01b-bf85-4e98-b193-c023a7d173da-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-qbkv6\" (UID: \"98c7d01b-bf85-4e98-b193-c023a7d173da\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6"
	Nov 19 22:32:43 old-k8s-version-680619 kubelet[742]: I1119 22:32:43.398148     742 scope.go:117] "RemoveContainer" containerID="14802135cedc5baccdefb683aaae6d5e500cddbb637863c78c0d85a34ddfffd6"
	Nov 19 22:32:44 old-k8s-version-680619 kubelet[742]: I1119 22:32:44.402115     742 scope.go:117] "RemoveContainer" containerID="14802135cedc5baccdefb683aaae6d5e500cddbb637863c78c0d85a34ddfffd6"
	Nov 19 22:32:44 old-k8s-version-680619 kubelet[742]: I1119 22:32:44.402423     742 scope.go:117] "RemoveContainer" containerID="452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1"
	Nov 19 22:32:44 old-k8s-version-680619 kubelet[742]: E1119 22:32:44.402806     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qbkv6_kubernetes-dashboard(98c7d01b-bf85-4e98-b193-c023a7d173da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6" podUID="98c7d01b-bf85-4e98-b193-c023a7d173da"
	Nov 19 22:32:45 old-k8s-version-680619 kubelet[742]: I1119 22:32:45.406157     742 scope.go:117] "RemoveContainer" containerID="452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1"
	Nov 19 22:32:45 old-k8s-version-680619 kubelet[742]: E1119 22:32:45.406524     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qbkv6_kubernetes-dashboard(98c7d01b-bf85-4e98-b193-c023a7d173da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6" podUID="98c7d01b-bf85-4e98-b193-c023a7d173da"
	Nov 19 22:32:47 old-k8s-version-680619 kubelet[742]: I1119 22:32:47.424681     742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gv4nv" podStartSLOduration=1.15831161 podCreationTimestamp="2025-11-19 22:32:40 +0000 UTC" firstStartedPulling="2025-11-19 22:32:40.604359479 +0000 UTC m=+15.347348323" lastFinishedPulling="2025-11-19 22:32:46.870669637 +0000 UTC m=+21.613658481" observedRunningTime="2025-11-19 22:32:47.424391276 +0000 UTC m=+22.167380129" watchObservedRunningTime="2025-11-19 22:32:47.424621768 +0000 UTC m=+22.167610619"
	Nov 19 22:32:50 old-k8s-version-680619 kubelet[742]: I1119 22:32:50.578641     742 scope.go:117] "RemoveContainer" containerID="452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1"
	Nov 19 22:32:50 old-k8s-version-680619 kubelet[742]: E1119 22:32:50.578928     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qbkv6_kubernetes-dashboard(98c7d01b-bf85-4e98-b193-c023a7d173da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6" podUID="98c7d01b-bf85-4e98-b193-c023a7d173da"
	Nov 19 22:32:59 old-k8s-version-680619 kubelet[742]: I1119 22:32:59.441724     742 scope.go:117] "RemoveContainer" containerID="414de1bfb6aec59dd4d86bf9f2fea33a808f25024113c43be7b1c30c813216b0"
	Nov 19 22:33:02 old-k8s-version-680619 kubelet[742]: I1119 22:33:02.347373     742 scope.go:117] "RemoveContainer" containerID="452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1"
	Nov 19 22:33:02 old-k8s-version-680619 kubelet[742]: I1119 22:33:02.452552     742 scope.go:117] "RemoveContainer" containerID="452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1"
	Nov 19 22:33:02 old-k8s-version-680619 kubelet[742]: I1119 22:33:02.452770     742 scope.go:117] "RemoveContainer" containerID="81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a"
	Nov 19 22:33:02 old-k8s-version-680619 kubelet[742]: E1119 22:33:02.453171     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qbkv6_kubernetes-dashboard(98c7d01b-bf85-4e98-b193-c023a7d173da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6" podUID="98c7d01b-bf85-4e98-b193-c023a7d173da"
	Nov 19 22:33:10 old-k8s-version-680619 kubelet[742]: I1119 22:33:10.578477     742 scope.go:117] "RemoveContainer" containerID="81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a"
	Nov 19 22:33:10 old-k8s-version-680619 kubelet[742]: E1119 22:33:10.578756     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qbkv6_kubernetes-dashboard(98c7d01b-bf85-4e98-b193-c023a7d173da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6" podUID="98c7d01b-bf85-4e98-b193-c023a7d173da"
	Nov 19 22:33:15 old-k8s-version-680619 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:33:15 old-k8s-version-680619 kubelet[742]: I1119 22:33:15.492667     742 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 19 22:33:15 old-k8s-version-680619 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:33:15 old-k8s-version-680619 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 22:33:15 old-k8s-version-680619 systemd[1]: kubelet.service: Consumed 1.336s CPU time.
	
	
	==> kubernetes-dashboard [bd5fb08be8644f6048088566e05635f71fa87e5bcbb3288f49cac595a87fda22] <==
	2025/11/19 22:32:46 Using namespace: kubernetes-dashboard
	2025/11/19 22:32:46 Using in-cluster config to connect to apiserver
	2025/11/19 22:32:46 Using secret token for csrf signing
	2025/11/19 22:32:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 22:32:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 22:32:46 Successful initial request to the apiserver, version: v1.28.0
	2025/11/19 22:32:46 Generating JWE encryption key
	2025/11/19 22:32:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 22:32:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 22:32:47 Initializing JWE encryption key from synchronized object
	2025/11/19 22:32:47 Creating in-cluster Sidecar client
	2025/11/19 22:32:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:32:47 Serving insecurely on HTTP port: 9090
	2025/11/19 22:33:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:32:46 Starting overwatch
	
	
	==> storage-provisioner [414de1bfb6aec59dd4d86bf9f2fea33a808f25024113c43be7b1c30c813216b0] <==
	I1119 22:32:28.716117       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 22:32:58.718313       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [73d881498864dc714b5d899ea9180eb714418ab2c2ac07c6c45be10aa174996b] <==
	I1119 22:32:59.489249       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:32:59.495964       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:32:59.496002       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 22:33:16.891556       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:33:16.891721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-680619_53e05059-dfbc-43f7-af5e-f3950d689b7d!
	I1119 22:33:16.891696       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa8da102-18e8-4e00-96cc-7642d9f355a2", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-680619_53e05059-dfbc-43f7-af5e-f3950d689b7d became leader
	I1119 22:33:16.992496       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-680619_53e05059-dfbc-43f7-af5e-f3950d689b7d!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-680619 -n old-k8s-version-680619
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-680619 -n old-k8s-version-680619: exit status 2 (330.602737ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-680619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
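The failed check above shells out to the minikube binary with a Go-template format string. As a rough illustration (not part of helpers_test.go; the binary path and profile name are simply copied from the log), the same probe could be reproduced in a few lines of Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiServerStatus runs `minikube status --format={{.APIServer}} -p <profile>`.
// minikube exits non-zero when any component is not Running (the exit status 2
// seen above), so the output is worth reading even when err != nil.
func apiServerStatus(binary, profile string) (string, error) {
	out, err := exec.Command(binary, "status",
		"--format", "{{.APIServer}}", "-p", profile).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	status, err := apiServerStatus("out/minikube-linux-amd64", "old-k8s-version-680619")
	fmt.Printf("apiserver: %q (err: %v)\n", status, err)
}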
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-680619
helpers_test.go:243: (dbg) docker inspect old-k8s-version-680619:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919",
	        "Created": "2025-11-19T22:31:10.323294154Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 243531,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:32:19.395015357Z",
	            "FinishedAt": "2025-11-19T22:32:18.544824931Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919/hostname",
	        "HostsPath": "/var/lib/docker/containers/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919/hosts",
	        "LogPath": "/var/lib/docker/containers/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919/08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919-json.log",
	        "Name": "/old-k8s-version-680619",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-680619:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-680619",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "08365271d4a40bb79310d316f4ac980c1edd6b1d69701be90ec376c6c2974919",
	                "LowerDir": "/var/lib/docker/overlay2/4d051ee79cf99bebb3106f63a795d2de9d9c603c6888b47d14e37e908cc4d8ac-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d051ee79cf99bebb3106f63a795d2de9d9c603c6888b47d14e37e908cc4d8ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d051ee79cf99bebb3106f63a795d2de9d9c603c6888b47d14e37e908cc4d8ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d051ee79cf99bebb3106f63a795d2de9d9c603c6888b47d14e37e908cc4d8ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-680619",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-680619/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-680619",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-680619",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-680619",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1356c24c10d3ffe8c88ef299c6d1288cd3d6953d2434bbd08bc1e77831e86e03",
	            "SandboxKey": "/var/run/docker/netns/1356c24c10d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-680619": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7d9a9064074d1b313e6d7afbded8c0b7d9aaeb41b178a1f248c1547e69e77bbc",
	                    "EndpointID": "c7bd25fb5f5055fb189e9c8af21547e98d61a73f7e3d735c8db895bb4fbd59d0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "5a:48:7c:94:d0:0b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-680619",
	                        "08365271d4a4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
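The docker inspect dump above is the full JSON; when only one field is needed (for example the container IP on the old-k8s-version-680619 network), docker's --format template can extract it directly. A minimal sketch, wrapped in Go to match the test harness; the container name is the one shown in the dump:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent to:
	//   docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-680619
	out, err := exec.Command("docker", "inspect",
		"-f", "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}",
		"old-k8s-version-680619").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	// Per the NetworkSettings section above this should print 192.168.76.2.
	fmt.Println("container IP:", strings.TrimSpace(string(out)))
}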
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680619 -n old-k8s-version-680619
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680619 -n old-k8s-version-680619: exit status 2 (347.300827ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-680619 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-680619 logs -n 25: (1.144187268s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-662839 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p missing-upgrade-015670 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-015670    │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:31 UTC │
	│ ssh     │ -p NoKubernetes-662839 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ stop    │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p NoKubernetes-662839 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ stop    │ -p kubernetes-upgrade-801704                                                                                                                                                                                                                  │ kubernetes-upgrade-801704 │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:30 UTC │
	│ start   │ -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-801704 │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ ssh     │ -p NoKubernetes-662839 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ delete  │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839       │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ delete  │ -p missing-upgrade-015670                                                                                                                                                                                                                     │ missing-upgrade-015670    │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-680619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ stop    │ -p old-k8s-version-680619 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680619 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-178067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ stop    │ -p no-preload-178067 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable dashboard -p no-preload-178067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067         │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ start   │ -p cert-expiration-855818 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-855818    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ delete  │ -p cert-expiration-855818                                                                                                                                                                                                                     │ cert-expiration-855818    │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380        │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ image   │ old-k8s-version-680619 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p old-k8s-version-680619 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680619    │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
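For reference, a minimal sketch of reproducing the run that the "Last Start" log below traces, using the same flags recorded in the embed-certs-443380 row of the table above (the status check afterwards is an illustrative addition, not part of the recorded history):

  # start the profile exactly as the command table records it
  out/minikube-linux-amd64 start -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1
  # then confirm the node and apiserver came up
  out/minikube-linux-amd64 status -p embed-certs-443380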
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:33:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:33:01.497334  252325 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:33:01.497583  252325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:01.497591  252325 out.go:374] Setting ErrFile to fd 2...
	I1119 22:33:01.497595  252325 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:01.497760  252325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:33:01.498196  252325 out.go:368] Setting JSON to false
	I1119 22:33:01.499292  252325 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4529,"bootTime":1763587052,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:33:01.499361  252325 start.go:143] virtualization: kvm guest
	I1119 22:33:01.501379  252325 out.go:179] * [embed-certs-443380] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:33:01.504213  252325 notify.go:221] Checking for updates...
	I1119 22:33:01.504227  252325 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:33:01.505367  252325 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:33:01.506547  252325 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:33:01.507637  252325 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:33:01.508761  252325 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:33:01.509901  252325 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:32:59.120306  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:32:59.120361  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:32:59.120420  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:32:59.146689  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:32:59.146706  229026 cri.go:89] found id: "49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3"
	I1119 22:32:59.146710  229026 cri.go:89] found id: ""
	I1119 22:32:59.146717  229026 logs.go:282] 2 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f 49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3]
	I1119 22:32:59.146767  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:32:59.150639  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:32:59.154354  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:32:59.154408  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:32:59.179641  229026 cri.go:89] found id: ""
	I1119 22:32:59.179658  229026 logs.go:282] 0 containers: []
	W1119 22:32:59.179664  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:32:59.179670  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:32:59.179715  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:32:59.205860  229026 cri.go:89] found id: ""
	I1119 22:32:59.205880  229026 logs.go:282] 0 containers: []
	W1119 22:32:59.205889  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:32:59.205896  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:32:59.205938  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:32:59.232101  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:32:59.232121  229026 cri.go:89] found id: ""
	I1119 22:32:59.232130  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:32:59.232174  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:32:59.235754  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:32:59.235805  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:32:59.259838  229026 cri.go:89] found id: ""
	I1119 22:32:59.259860  229026 logs.go:282] 0 containers: []
	W1119 22:32:59.259867  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:32:59.259876  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:32:59.259912  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:32:59.284492  229026 cri.go:89] found id: "c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:32:59.284508  229026 cri.go:89] found id: ""
	I1119 22:32:59.284515  229026 logs.go:282] 1 containers: [c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a]
	I1119 22:32:59.284552  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:32:59.288183  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:32:59.288236  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:32:59.313254  229026 cri.go:89] found id: ""
	I1119 22:32:59.313272  229026 logs.go:282] 0 containers: []
	W1119 22:32:59.313278  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:32:59.313284  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:32:59.313319  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:32:59.341016  229026 cri.go:89] found id: ""
	I1119 22:32:59.341037  229026 logs.go:282] 0 containers: []
	W1119 22:32:59.341046  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:32:59.341061  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:32:59.341074  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
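The block above is minikube enumerating control-plane containers through crictl after the apiserver health check timed out; only kube-apiserver (2), kube-scheduler (1) and kube-controller-manager (1) containers are found, while etcd, coredns, kube-proxy, kindnet and storage-provisioner all come back empty. A minimal sketch of running the same inspection by hand inside the node (for example over `minikube ssh`), using the commands the log already issues:

  # list container IDs for one component, exactly as logs.go does
  sudo crictl ps -a --quiet --name=kube-apiserver
  # drop --quiet to see names, states and ages for everything in one view
  sudo crictl ps -a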
	I1119 22:33:01.511471  252325 config.go:182] Loaded profile config "kubernetes-upgrade-801704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:01.511581  252325 config.go:182] Loaded profile config "no-preload-178067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:01.511685  252325 config.go:182] Loaded profile config "old-k8s-version-680619": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1119 22:33:01.511796  252325 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:33:01.535299  252325 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:33:01.535408  252325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:33:01.590414  252325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:33:01.580981289 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:33:01.590510  252325 docker.go:319] overlay module found
	I1119 22:33:01.592754  252325 out.go:179] * Using the docker driver based on user configuration
	I1119 22:33:01.593795  252325 start.go:309] selected driver: docker
	I1119 22:33:01.593807  252325 start.go:930] validating driver "docker" against <nil>
	I1119 22:33:01.593848  252325 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:33:01.594432  252325 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:33:01.649608  252325 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:33:01.638967992 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:33:01.649791  252325 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:33:01.650061  252325 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:33:01.651601  252325 out.go:179] * Using Docker driver with root privileges
	I1119 22:33:01.652670  252325 cni.go:84] Creating CNI manager for ""
	I1119 22:33:01.652740  252325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:33:01.652755  252325 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:33:01.652829  252325 start.go:353] cluster config:
	{Name:embed-certs-443380 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:33:01.654179  252325 out.go:179] * Starting "embed-certs-443380" primary control-plane node in "embed-certs-443380" cluster
	I1119 22:33:01.655183  252325 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:33:01.656323  252325 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:33:01.657295  252325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:01.657320  252325 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:33:01.657327  252325 cache.go:65] Caching tarball of preloaded images
	I1119 22:33:01.657382  252325 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:33:01.657413  252325 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:33:01.657427  252325 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:33:01.657527  252325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/config.json ...
	I1119 22:33:01.657547  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/config.json: {Name:mk4297190b4b8789cd79e77fffa134a382aad579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:01.676904  252325 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:33:01.676924  252325 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:33:01.676943  252325 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:33:01.676967  252325 start.go:360] acquireMachinesLock for embed-certs-443380: {Name:mk45876245c2cf21fce38118b7c82861612c5d41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:33:01.677071  252325 start.go:364] duration metric: took 86.075µs to acquireMachinesLock for "embed-certs-443380"
	I1119 22:33:01.677099  252325 start.go:93] Provisioning new machine with config: &{Name:embed-certs-443380 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:33:01.677185  252325 start.go:125] createHost starting for "" (driver="docker")
	W1119 22:33:00.606900  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:02.607345  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:32:59.984769  243333 pod_ready.go:104] pod "coredns-5dd5756b68-7bkvq" is not "Ready", error: <nil>
	I1119 22:33:01.984418  243333 pod_ready.go:94] pod "coredns-5dd5756b68-7bkvq" is "Ready"
	I1119 22:33:01.984439  243333 pod_ready.go:86] duration metric: took 32.504951912s for pod "coredns-5dd5756b68-7bkvq" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:01.986903  243333 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:01.990522  243333 pod_ready.go:94] pod "etcd-old-k8s-version-680619" is "Ready"
	I1119 22:33:01.990543  243333 pod_ready.go:86] duration metric: took 3.620785ms for pod "etcd-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:01.993127  243333 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:01.997419  243333 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-680619" is "Ready"
	I1119 22:33:01.997438  243333 pod_ready.go:86] duration metric: took 4.2896ms for pod "kube-apiserver-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:02.000066  243333 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:02.183791  243333 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-680619" is "Ready"
	I1119 22:33:02.183849  243333 pod_ready.go:86] duration metric: took 183.762211ms for pod "kube-controller-manager-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:02.384133  243333 pod_ready.go:83] waiting for pod "kube-proxy-4xxp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:02.783807  243333 pod_ready.go:94] pod "kube-proxy-4xxp4" is "Ready"
	I1119 22:33:02.783841  243333 pod_ready.go:86] duration metric: took 399.678694ms for pod "kube-proxy-4xxp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:02.983668  243333 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:03.383429  243333 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-680619" is "Ready"
	I1119 22:33:03.383457  243333 pod_ready.go:86] duration metric: took 399.75864ms for pod "kube-scheduler-old-k8s-version-680619" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:03.383471  243333 pod_ready.go:40] duration metric: took 33.907172866s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:33:03.437022  243333 start.go:628] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1119 22:33:03.438652  243333 out.go:203] 
	W1119 22:33:03.439810  243333 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1119 22:33:03.440920  243333 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1119 22:33:03.442028  243333 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-680619" cluster and "default" namespace by default
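The warning above flags a skew of six minor versions between the host kubectl (1.34.2) and the cluster (1.28.0). Following the hint printed in the log, the version-matched kubectl bundled by minikube can be used against this profile instead, for example:

  # run kubectl at the cluster's own Kubernetes version via minikube
  out/minikube-linux-amd64 -p old-k8s-version-680619 kubectl -- get pods -A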
	I1119 22:33:01.678767  252325 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:33:01.679004  252325 start.go:159] libmachine.API.Create for "embed-certs-443380" (driver="docker")
	I1119 22:33:01.679037  252325 client.go:173] LocalClient.Create starting
	I1119 22:33:01.679097  252325 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem
	I1119 22:33:01.679132  252325 main.go:143] libmachine: Decoding PEM data...
	I1119 22:33:01.679159  252325 main.go:143] libmachine: Parsing certificate...
	I1119 22:33:01.679233  252325 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem
	I1119 22:33:01.679264  252325 main.go:143] libmachine: Decoding PEM data...
	I1119 22:33:01.679277  252325 main.go:143] libmachine: Parsing certificate...
	I1119 22:33:01.679589  252325 cli_runner.go:164] Run: docker network inspect embed-certs-443380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:33:01.695556  252325 cli_runner.go:211] docker network inspect embed-certs-443380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:33:01.695639  252325 network_create.go:284] running [docker network inspect embed-certs-443380] to gather additional debugging logs...
	I1119 22:33:01.695659  252325 cli_runner.go:164] Run: docker network inspect embed-certs-443380
	W1119 22:33:01.711968  252325 cli_runner.go:211] docker network inspect embed-certs-443380 returned with exit code 1
	I1119 22:33:01.711999  252325 network_create.go:287] error running [docker network inspect embed-certs-443380]: docker network inspect embed-certs-443380: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-443380 not found
	I1119 22:33:01.712023  252325 network_create.go:289] output of [docker network inspect embed-certs-443380]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-443380 not found
	
	** /stderr **
	I1119 22:33:01.712112  252325 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:33:01.747083  252325 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cde0f356bd10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b5:fa:ba:e0:a6} reservation:<nil>}
	I1119 22:33:01.747803  252325 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-47fb5ce24a02 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:30:91:0e:d6:d9} reservation:<nil>}
	I1119 22:33:01.748524  252325 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2592199ffac9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:9b:dd:65:07:28} reservation:<nil>}
	I1119 22:33:01.749214  252325 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7d9a9064074d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:c1:e4:50:35:aa} reservation:<nil>}
	I1119 22:33:01.750035  252325 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eee910}
	I1119 22:33:01.750076  252325 network_create.go:124] attempt to create docker network embed-certs-443380 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1119 22:33:01.750124  252325 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-443380 embed-certs-443380
	I1119 22:33:01.796065  252325 network_create.go:108] docker network embed-certs-443380 192.168.85.0/24 created
	I1119 22:33:01.796091  252325 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-443380" container
	I1119 22:33:01.796140  252325 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:33:01.812987  252325 cli_runner.go:164] Run: docker volume create embed-certs-443380 --label name.minikube.sigs.k8s.io=embed-certs-443380 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:33:01.830803  252325 oci.go:103] Successfully created a docker volume embed-certs-443380
	I1119 22:33:01.830903  252325 cli_runner.go:164] Run: docker run --rm --name embed-certs-443380-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-443380 --entrypoint /usr/bin/test -v embed-certs-443380:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:33:02.226183  252325 oci.go:107] Successfully prepared a docker volume embed-certs-443380
	I1119 22:33:02.226240  252325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:02.226267  252325 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:33:02.226337  252325 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-443380:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
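The two `docker run --rm` invocations above are how minikube seeds the node volume: the first runs `test -d /var/lib` against the freshly created volume mounted at /var inside the kicbase image, the second untars the preloaded image tarball into it. A standalone sketch of the same pattern with illustrative volume and tarball names (the image reference is the one used in the log, minus the digest):

  # extract an lz4-compressed tarball into a named volume via a throwaway container
  docker volume create demo-node-var
  docker run --rm --entrypoint /usr/bin/tar \
    -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
    -v demo-node-var:/extractDir \
    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918 \
    -I lz4 -xf /preloaded.tar -C /extractDir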
	W1119 22:33:04.607874  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:07.108171  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	I1119 22:33:06.674938  252325 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-443380:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.448558667s)
	I1119 22:33:06.674965  252325 kic.go:203] duration metric: took 4.448710608s to extract preloaded images to volume ...
	W1119 22:33:06.675032  252325 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:33:06.675067  252325 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:33:06.675114  252325 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:33:06.730759  252325 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-443380 --name embed-certs-443380 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-443380 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-443380 --network embed-certs-443380 --ip 192.168.85.2 --volume embed-certs-443380:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:33:07.025074  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Running}}
	I1119 22:33:07.043685  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:07.061043  252325 cli_runner.go:164] Run: docker exec embed-certs-443380 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:33:07.107128  252325 oci.go:144] the created container "embed-certs-443380" has a running status.
	I1119 22:33:07.107156  252325 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa...
	I1119 22:33:07.271265  252325 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:33:07.297185  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:07.319017  252325 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:33:07.319041  252325 kic_runner.go:114] Args: [docker exec --privileged embed-certs-443380 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:33:07.369406  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:07.390718  252325 machine.go:94] provisionDockerMachine start ...
	I1119 22:33:07.390847  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:07.410701  252325 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:07.411073  252325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1119 22:33:07.411096  252325 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:33:07.536529  252325 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-443380
	
	I1119 22:33:07.536561  252325 ubuntu.go:182] provisioning hostname "embed-certs-443380"
	I1119 22:33:07.536611  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:07.555612  252325 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:07.555911  252325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1119 22:33:07.555936  252325 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-443380 && echo "embed-certs-443380" | sudo tee /etc/hostname
	I1119 22:33:07.692510  252325 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-443380
	
	I1119 22:33:07.692623  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:07.710692  252325 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:07.710918  252325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1119 22:33:07.710936  252325 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-443380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-443380/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-443380' | sudo tee -a /etc/hosts; 
				fi
			fi
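The SSH script above sets the node hostname and pins it under 127.0.1.1 in /etc/hosts. A quick sketch of verifying both from the host once provisioning has finished (profile name taken from the log):

  out/minikube-linux-amd64 -p embed-certs-443380 ssh "hostname && grep embed-certs-443380 /etc/hosts"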
	I1119 22:33:07.833957  252325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:33:07.833985  252325 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 22:33:07.834025  252325 ubuntu.go:190] setting up certificates
	I1119 22:33:07.834039  252325 provision.go:84] configureAuth start
	I1119 22:33:07.834092  252325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-443380
	I1119 22:33:07.853762  252325 provision.go:143] copyHostCerts
	I1119 22:33:07.853838  252325 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem, removing ...
	I1119 22:33:07.853853  252325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem
	I1119 22:33:07.853932  252325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 22:33:07.854052  252325 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem, removing ...
	I1119 22:33:07.854064  252325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem
	I1119 22:33:07.854102  252325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 22:33:07.854199  252325 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem, removing ...
	I1119 22:33:07.854210  252325 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem
	I1119 22:33:07.854249  252325 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 22:33:07.854322  252325 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.embed-certs-443380 san=[127.0.0.1 192.168.85.2 embed-certs-443380 localhost minikube]
	I1119 22:33:07.974478  252325 provision.go:177] copyRemoteCerts
	I1119 22:33:07.974531  252325 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:33:07.974564  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:07.991932  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:08.083359  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:33:08.101926  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:33:08.119318  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:33:08.135640  252325 provision.go:87] duration metric: took 301.58365ms to configureAuth
	I1119 22:33:08.135664  252325 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:33:08.135807  252325 config.go:182] Loaded profile config "embed-certs-443380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:08.135918  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:08.153751  252325 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:08.153984  252325 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1119 22:33:08.154006  252325 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:33:08.414470  252325 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:33:08.414501  252325 machine.go:97] duration metric: took 1.023758944s to provisionDockerMachine
	I1119 22:33:08.414515  252325 client.go:176] duration metric: took 6.735469813s to LocalClient.Create
	I1119 22:33:08.414535  252325 start.go:167] duration metric: took 6.735531465s to libmachine.API.Create "embed-certs-443380"
	I1119 22:33:08.414546  252325 start.go:293] postStartSetup for "embed-certs-443380" (driver="docker")
	I1119 22:33:08.414564  252325 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:33:08.414662  252325 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:33:08.414715  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:08.431992  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:08.524868  252325 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:33:08.528237  252325 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:33:08.528259  252325 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:33:08.528269  252325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 22:33:08.528312  252325 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 22:33:08.528387  252325 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem -> 128292.pem in /etc/ssl/certs
	I1119 22:33:08.528503  252325 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:33:08.535937  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:33:08.554743  252325 start.go:296] duration metric: took 140.180844ms for postStartSetup
	I1119 22:33:08.555113  252325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-443380
	I1119 22:33:08.572861  252325 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/config.json ...
	I1119 22:33:08.573077  252325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:33:08.573118  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:08.590191  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:08.680487  252325 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:33:08.685102  252325 start.go:128] duration metric: took 7.007901367s to createHost
	I1119 22:33:08.685126  252325 start.go:83] releasing machines lock for "embed-certs-443380", held for 7.008041998s
	I1119 22:33:08.685186  252325 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-443380
	I1119 22:33:08.703503  252325 ssh_runner.go:195] Run: cat /version.json
	I1119 22:33:08.703549  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:08.703592  252325 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:33:08.703666  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:08.721011  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:08.721943  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:08.881942  252325 ssh_runner.go:195] Run: systemctl --version
	I1119 22:33:08.888240  252325 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:33:08.922369  252325 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:33:08.926928  252325 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:33:08.926999  252325 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:33:08.951143  252325 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:33:08.951165  252325 start.go:496] detecting cgroup driver to use...
	I1119 22:33:08.951195  252325 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:33:08.951235  252325 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:33:08.966336  252325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:33:08.977800  252325 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:33:08.977875  252325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:33:08.993074  252325 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:33:09.010878  252325 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:33:09.089917  252325 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:33:09.173998  252325 docker.go:234] disabling docker service ...
	I1119 22:33:09.174060  252325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:33:09.191939  252325 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:33:09.203668  252325 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:33:09.286446  252325 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:33:09.370884  252325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
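Before touching the CRI-O config, minikube stops, disables and masks cri-dockerd and Docker inside the node so they cannot claim the CRI socket, as the sequence above shows. A condensed sketch of the same steps, to be run inside the node rather than on the host:

  # keep cri-dockerd and docker from competing with CRI-O
  sudo systemctl stop -f cri-docker.socket cri-docker.service
  sudo systemctl disable cri-docker.socket
  sudo systemctl mask cri-docker.service
  sudo systemctl stop -f docker.socket docker.service
  sudo systemctl disable docker.socket
  sudo systemctl mask docker.service
  # confirm nothing is still active
  systemctl is-active docker || echo "docker is inactive"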
	I1119 22:33:09.382304  252325 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:33:09.395806  252325 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:33:09.395869  252325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.405703  252325 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 22:33:09.405765  252325 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.414903  252325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.423682  252325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.432014  252325 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:33:09.440550  252325 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.450361  252325 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.463183  252325 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:09.472281  252325 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:33:09.479192  252325 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:33:09.486205  252325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:09.572673  252325 ssh_runner.go:195] Run: sudo systemctl restart crio
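	Note: the sed edits above all target the drop-in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A minimal sketch (not from the log) of checking that the expected keys landed:
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	        /etc/crio/crio.conf.d/02-crio.conf
	    # expected, approximately:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "systemd"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",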
	I1119 22:33:09.712065  252325 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:33:09.712122  252325 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:33:09.715745  252325 start.go:564] Will wait 60s for crictl version
	I1119 22:33:09.715788  252325 ssh_runner.go:195] Run: which crictl
	I1119 22:33:09.719102  252325 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:33:09.743243  252325 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:33:09.743315  252325 ssh_runner.go:195] Run: crio --version
	I1119 22:33:09.771239  252325 ssh_runner.go:195] Run: crio --version
	I1119 22:33:09.798320  252325 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:33:09.799411  252325 cli_runner.go:164] Run: docker network inspect embed-certs-443380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:33:09.818975  252325 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:33:09.822938  252325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:33:09.833252  252325 kubeadm.go:884] updating cluster {Name:embed-certs-443380 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:33:09.833383  252325 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:09.833437  252325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:33:09.864567  252325 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:33:09.864586  252325 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:33:09.864627  252325 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:33:09.888156  252325 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:33:09.888172  252325 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:33:09.888179  252325 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 22:33:09.888264  252325 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-443380 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:33:09.888348  252325 ssh_runner.go:195] Run: crio config
	I1119 22:33:09.931564  252325 cni.go:84] Creating CNI manager for ""
	I1119 22:33:09.931589  252325 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:33:09.931609  252325 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:33:09.931634  252325 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-443380 NodeName:embed-certs-443380 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:33:09.931800  252325 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-443380"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
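	Note: the generated file above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one document. As a hedged aside (the test does not do this), it can be sanity-checked against the same kubeadm binary the run uses, assuming that build supports the validate subcommand:
	    # uses the kubeadm shipped in /var/lib/minikube/binaries/v1.34.1, as in the init command later in this log
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	    # or compare against upstream defaults for this release
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config print init-defaults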
	
	I1119 22:33:09.931876  252325 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:33:09.939707  252325 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:33:09.939772  252325 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:33:09.947057  252325 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 22:33:09.958831  252325 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:33:09.972980  252325 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 22:33:09.984521  252325 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:33:09.987795  252325 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:33:09.996810  252325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:10.072984  252325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:33:10.095866  252325 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380 for IP: 192.168.85.2
	I1119 22:33:10.095886  252325 certs.go:195] generating shared ca certs ...
	I1119 22:33:10.095901  252325 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.096020  252325 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 22:33:10.096071  252325 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 22:33:10.096083  252325 certs.go:257] generating profile certs ...
	I1119 22:33:10.096132  252325 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.key
	I1119 22:33:10.096149  252325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.crt with IP's: []
	I1119 22:33:10.283329  252325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.crt ...
	I1119 22:33:10.283353  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.crt: {Name:mkbf7ad9fcf142ca89ca73eee96635beed02dbb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.283521  252325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.key ...
	I1119 22:33:10.283539  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.key: {Name:mk187744faba3bdf35a617d99549b829a4312db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.283621  252325 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key.8b1e4b78
	I1119 22:33:10.283635  252325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt.8b1e4b78 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1119 22:33:10.617647  252325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt.8b1e4b78 ...
	I1119 22:33:10.617670  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt.8b1e4b78: {Name:mk6a8b92fbdf38f1d80b191920927bfc89cab752 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.617810  252325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key.8b1e4b78 ...
	I1119 22:33:10.617831  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key.8b1e4b78: {Name:mk3e1c4c577d13edada9e089fe5ea5d95f8f8e71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.617903  252325 certs.go:382] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt.8b1e4b78 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt
	I1119 22:33:10.617990  252325 certs.go:386] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key.8b1e4b78 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key
	I1119 22:33:10.618051  252325 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.key
	I1119 22:33:10.618066  252325 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.crt with IP's: []
	I1119 22:33:10.666118  252325 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.crt ...
	I1119 22:33:10.666141  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.crt: {Name:mk6f829de03d54e483844ba54310472124343694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.666274  252325 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.key ...
	I1119 22:33:10.666286  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.key: {Name:mk979cb89f953ae262da4eba61240efab10eb0b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:10.666454  252325 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem (1338 bytes)
	W1119 22:33:10.666486  252325 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829_empty.pem, impossibly tiny 0 bytes
	I1119 22:33:10.666508  252325 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:33:10.666534  252325 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:33:10.666555  252325 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:33:10.666575  252325 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 22:33:10.666612  252325 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:33:10.667149  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:33:10.685354  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:33:10.701745  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:33:10.717586  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:33:10.733709  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 22:33:10.750064  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:33:10.766215  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:33:10.783123  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:33:10.800167  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem --> /usr/share/ca-certificates/12829.pem (1338 bytes)
	I1119 22:33:10.820851  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /usr/share/ca-certificates/128292.pem (1708 bytes)
	I1119 22:33:10.837560  252325 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:33:10.853839  252325 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
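	Note: at this point the profile certificates generated above have been copied into /var/lib/minikube/certs on the node. Purely as an illustrative sketch, the SANs baked into the apiserver certificate (10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2 per the generation step above) and its validity window could be confirmed with openssl:
	    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	      | grep -A1 'Subject Alternative Name'
	    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -dates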
	I1119 22:33:10.865865  252325 ssh_runner.go:195] Run: openssl version
	I1119 22:33:10.871617  252325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12829.pem && ln -fs /usr/share/ca-certificates/12829.pem /etc/ssl/certs/12829.pem"
	I1119 22:33:10.879308  252325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12829.pem
	I1119 22:33:10.882566  252325 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12829.pem
	I1119 22:33:10.882615  252325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12829.pem
	I1119 22:33:10.916836  252325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12829.pem /etc/ssl/certs/51391683.0"
	I1119 22:33:10.924469  252325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128292.pem && ln -fs /usr/share/ca-certificates/128292.pem /etc/ssl/certs/128292.pem"
	I1119 22:33:10.932228  252325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128292.pem
	I1119 22:33:10.935677  252325 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128292.pem
	I1119 22:33:10.935714  252325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128292.pem
	I1119 22:33:10.971166  252325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128292.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:33:10.978907  252325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:33:10.986645  252325 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:10.990018  252325 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:10.990059  252325 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:11.023660  252325 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
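	Note: the pattern above (openssl x509 -hash followed by ln -fs into /etc/ssl/certs/<hash>.0) is OpenSSL's hashed certificate directory convention: the link name is the subject-name hash of the certificate. A minimal sketch of the same step done by hand for the minikube CA:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941, as in the log
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"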
	I1119 22:33:11.031967  252325 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:33:11.035349  252325 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:33:11.035406  252325 kubeadm.go:401] StartCluster: {Name:embed-certs-443380 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:33:11.035484  252325 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:33:11.035538  252325 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:33:11.061851  252325 cri.go:89] found id: ""
	I1119 22:33:11.061906  252325 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:33:11.069375  252325 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:33:11.076947  252325 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:33:11.076988  252325 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:33:11.084427  252325 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:33:11.084443  252325 kubeadm.go:158] found existing configuration files:
	
	I1119 22:33:11.084483  252325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1119 22:33:11.091939  252325 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:33:11.091982  252325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:33:11.099484  252325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1119 22:33:11.107042  252325 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:33:11.107089  252325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:33:11.113840  252325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1119 22:33:11.120960  252325 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:33:11.121002  252325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:33:11.127563  252325 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1119 22:33:11.134481  252325 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:33:11.134519  252325 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:33:11.141082  252325 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:33:11.179250  252325 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:33:11.179314  252325 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:33:11.199591  252325 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:33:11.199695  252325 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:33:11.199759  252325 kubeadm.go:319] OS: Linux
	I1119 22:33:11.199809  252325 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:33:11.199884  252325 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:33:11.199982  252325 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:33:11.200064  252325 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:33:11.200146  252325 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:33:11.200187  252325 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:33:11.200237  252325 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:33:11.200278  252325 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:33:11.255487  252325 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:33:11.255632  252325 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:33:11.255770  252325 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:33:11.263031  252325 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:33:11.265108  252325 out.go:252]   - Generating certificates and keys ...
	I1119 22:33:11.265194  252325 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:33:11.265276  252325 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:33:09.395630  229026 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.054534132s)
	W1119 22:33:09.395669  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1119 22:33:09.395678  229026 logs.go:123] Gathering logs for kube-apiserver [49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3] ...
	I1119 22:33:09.395691  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3"
	I1119 22:33:09.426003  229026 logs.go:123] Gathering logs for kube-controller-manager [c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a] ...
	I1119 22:33:09.426025  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:09.451373  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:09.451393  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:09.494706  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:09.494728  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:09.530478  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:09.530513  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:09.561557  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:09.561584  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:09.608153  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:09.608189  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:09.687781  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:09.687824  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1119 22:33:09.607025  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:11.607513  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	I1119 22:33:11.514090  252325 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:33:12.150795  252325 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:33:12.437161  252325 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:33:12.523242  252325 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:33:12.591260  252325 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:33:12.591495  252325 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-443380 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:33:12.659650  252325 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:33:12.659845  252325 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-443380 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1119 22:33:12.829884  252325 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:33:13.186082  252325 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:33:13.413475  252325 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:33:13.413714  252325 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:33:13.448006  252325 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:33:13.575106  252325 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:33:13.891287  252325 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:33:14.006907  252325 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:33:14.451405  252325 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:33:14.451998  252325 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:33:14.456348  252325 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:33:14.458346  252325 out.go:252]   - Booting up control plane ...
	I1119 22:33:14.458469  252325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:33:14.458576  252325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:33:14.459419  252325 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:33:14.486797  252325 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:33:14.486991  252325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:33:14.493486  252325 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:33:14.493708  252325 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:33:14.493776  252325 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:33:14.595843  252325 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:33:14.595995  252325 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:33:15.097121  252325 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.37281ms
	I1119 22:33:15.100255  252325 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:33:15.100358  252325 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1119 22:33:15.100463  252325 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:33:15.100563  252325 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:33:12.202183  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:12.928507  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": read tcp 192.168.94.1:38244->192.168.94.2:8443: read: connection reset by peer
	I1119 22:33:12.928582  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:12.928646  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:12.956386  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:12.956405  229026 cri.go:89] found id: "49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3"
	I1119 22:33:12.956409  229026 cri.go:89] found id: ""
	I1119 22:33:12.956416  229026 logs.go:282] 2 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f 49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3]
	I1119 22:33:12.956464  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:12.960303  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:12.963926  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:12.963980  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:12.990072  229026 cri.go:89] found id: ""
	I1119 22:33:12.990099  229026 logs.go:282] 0 containers: []
	W1119 22:33:12.990107  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:12.990114  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:12.990172  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:13.014444  229026 cri.go:89] found id: ""
	I1119 22:33:13.014466  229026 logs.go:282] 0 containers: []
	W1119 22:33:13.014476  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:13.014483  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:13.014524  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:13.042799  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:13.042832  229026 cri.go:89] found id: ""
	I1119 22:33:13.042843  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:13.042892  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:13.047157  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:13.047219  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:13.077530  229026 cri.go:89] found id: ""
	I1119 22:33:13.077554  229026 logs.go:282] 0 containers: []
	W1119 22:33:13.077563  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:13.077570  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:13.077628  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:13.104431  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:13.104454  229026 cri.go:89] found id: "c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:13.104460  229026 cri.go:89] found id: ""
	I1119 22:33:13.104469  229026 logs.go:282] 2 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a]
	I1119 22:33:13.104520  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:13.108896  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:13.112444  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:13.112496  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:13.137880  229026 cri.go:89] found id: ""
	I1119 22:33:13.137898  229026 logs.go:282] 0 containers: []
	W1119 22:33:13.137905  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:13.137912  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:13.137958  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:13.164525  229026 cri.go:89] found id: ""
	I1119 22:33:13.164547  229026 logs.go:282] 0 containers: []
	W1119 22:33:13.164557  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:13.164573  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:13.164585  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:13.206975  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:13.207002  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:13.235928  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:13.235949  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:13.289992  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:13.290010  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:13.290037  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:13.321096  229026 logs.go:123] Gathering logs for kube-apiserver [49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3] ...
	I1119 22:33:13.321120  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 49b0ae6bfba1ff83b92521f4eb8438b3ee0eb5aefcf215bbe6719378487397f3"
	I1119 22:33:13.352184  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:13.352209  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:13.397767  229026 logs.go:123] Gathering logs for kube-controller-manager [c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a] ...
	I1119 22:33:13.397792  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:13.422753  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:13.422777  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:13.502340  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:13.502369  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:13.516857  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:13.516884  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:16.043246  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:16.043702  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:16.043773  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:16.043844  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:16.078907  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:16.078931  229026 cri.go:89] found id: ""
	I1119 22:33:16.078942  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:16.078995  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:16.083927  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:16.083983  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:16.117131  229026 cri.go:89] found id: ""
	I1119 22:33:16.117154  229026 logs.go:282] 0 containers: []
	W1119 22:33:16.117164  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:16.117171  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:16.117237  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:16.150229  229026 cri.go:89] found id: ""
	I1119 22:33:16.150254  229026 logs.go:282] 0 containers: []
	W1119 22:33:16.150264  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:16.150272  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:16.150332  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:16.179213  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:16.179309  229026 cri.go:89] found id: ""
	I1119 22:33:16.179320  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:16.179377  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:16.184121  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:16.184179  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:16.214359  229026 cri.go:89] found id: ""
	I1119 22:33:16.214411  229026 logs.go:282] 0 containers: []
	W1119 22:33:16.214425  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:16.214433  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:16.214481  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:16.241647  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:16.241671  229026 cri.go:89] found id: "c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:16.241676  229026 cri.go:89] found id: ""
	I1119 22:33:16.241685  229026 logs.go:282] 2 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a]
	I1119 22:33:16.241732  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:16.245495  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:16.249060  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:16.249112  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:16.275319  229026 cri.go:89] found id: ""
	I1119 22:33:16.275343  229026 logs.go:282] 0 containers: []
	W1119 22:33:16.275352  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:16.275360  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:16.275412  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:16.302515  229026 cri.go:89] found id: ""
	I1119 22:33:16.302536  229026 logs.go:282] 0 containers: []
	W1119 22:33:16.302546  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:16.302561  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:16.302576  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:16.360426  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:16.360447  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:16.360461  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:16.393052  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:16.393077  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:16.439523  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:16.439547  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:16.466536  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:16.466558  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:16.496269  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:16.496294  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1119 22:33:14.107160  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:16.107886  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	I1119 22:33:16.613113  252325 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.51274894s
	I1119 22:33:17.589654  252325 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.489350043s
	I1119 22:33:19.102850  252325 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002453473s
	I1119 22:33:19.116709  252325 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:33:19.125547  252325 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:33:19.134777  252325 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:33:19.135084  252325 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-443380 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:33:19.143538  252325 kubeadm.go:319] [bootstrap-token] Using token: cdv89u.y5fxq8cgb9yspbwe
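	Note: the control-plane-check phase above polls the component health endpoints directly. The same endpoints it reports (kube-apiserver livez on 192.168.85.2:8443, kube-controller-manager healthz on 127.0.0.1:10257, kube-scheduler livez on 127.0.0.1:10259) can be probed by hand from inside the embed-certs-443380 node; an illustrative sketch, assuming kubeadm's default anonymous access to health paths:
	    curl -sk https://192.168.85.2:8443/livez   && echo   # kube-apiserver
	    curl -sk https://127.0.0.1:10257/healthz   && echo   # kube-controller-manager
	    curl -sk https://127.0.0.1:10259/livez     && echo   # kube-scheduler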
	
	
	==> CRI-O <==
	Nov 19 22:32:46 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:46.905274359Z" level=info msg="Created container bd5fb08be8644f6048088566e05635f71fa87e5bcbb3288f49cac595a87fda22: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gv4nv/kubernetes-dashboard" id=fd20536a-bbdc-41f6-b2ef-0a00b4d16f6e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:32:46 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:46.905779366Z" level=info msg="Starting container: bd5fb08be8644f6048088566e05635f71fa87e5bcbb3288f49cac595a87fda22" id=3d8989b6-2c87-4973-a07c-21f023c5e493 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:32:46 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:46.907523072Z" level=info msg="Started container" PID=1757 containerID=bd5fb08be8644f6048088566e05635f71fa87e5bcbb3288f49cac595a87fda22 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gv4nv/kubernetes-dashboard id=3d8989b6-2c87-4973-a07c-21f023c5e493 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fce467e2331d0ba19f583d23ddcaddb494c73e31bb071e307004f956f244e8f0
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.442176966Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a385d1e3-4acd-44d7-8687-22acd3a05d7a name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.443002524Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a941a876-dcc6-4001-9386-d0665ac5a57d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.443993603Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=efe0841f-6834-4b62-bf55-c76d93e75ca9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.444103323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.448271828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.448440588Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/19e5e30eccdf59e32488686de5ffa10c2f74cc958c2de7b46e978ebbceee2c2d/merged/etc/passwd: no such file or directory"
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.448471027Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/19e5e30eccdf59e32488686de5ffa10c2f74cc958c2de7b46e978ebbceee2c2d/merged/etc/group: no such file or directory"
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.448722025Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.475210138Z" level=info msg="Created container 73d881498864dc714b5d899ea9180eb714418ab2c2ac07c6c45be10aa174996b: kube-system/storage-provisioner/storage-provisioner" id=efe0841f-6834-4b62-bf55-c76d93e75ca9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.47564428Z" level=info msg="Starting container: 73d881498864dc714b5d899ea9180eb714418ab2c2ac07c6c45be10aa174996b" id=528f98b4-df58-4bf0-902c-d55a1a722d50 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:32:59 old-k8s-version-680619 crio[576]: time="2025-11-19T22:32:59.477375821Z" level=info msg="Started container" PID=1780 containerID=73d881498864dc714b5d899ea9180eb714418ab2c2ac07c6c45be10aa174996b description=kube-system/storage-provisioner/storage-provisioner id=528f98b4-df58-4bf0-902c-d55a1a722d50 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a3605f662dd55f24b8446f72ed24f2a97edc0868fa4f35f3bc26074e8fd22b37
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.348126131Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4bb14b9c-fe17-4a05-a337-04ee6b52f029 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.349150787Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=87485864-9c19-4bb4-980d-58dafa2bf71c name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.350274669Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6/dashboard-metrics-scraper" id=7b9a80df-bc5f-4468-8f64-fd82c49d9e3a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.350407024Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.357714761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.358406903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.390133516Z" level=info msg="Created container 81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6/dashboard-metrics-scraper" id=7b9a80df-bc5f-4468-8f64-fd82c49d9e3a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.39080061Z" level=info msg="Starting container: 81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a" id=8d5ba1c9-f232-4d39-8e21-668a9836c665 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.392801137Z" level=info msg="Started container" PID=1799 containerID=81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6/dashboard-metrics-scraper id=8d5ba1c9-f232-4d39-8e21-668a9836c665 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e56d3bb7a822f016b06026f48d31d7f7539cf62479007fe218cee1adad807a5
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.453810377Z" level=info msg="Removing container: 452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1" id=cd664628-8c59-4e66-8023-05d9af26486e name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:33:02 old-k8s-version-680619 crio[576]: time="2025-11-19T22:33:02.464789358Z" level=info msg="Removed container 452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6/dashboard-metrics-scraper" id=cd664628-8c59-4e66-8023-05d9af26486e name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	81d3c1d628f85       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   8e56d3bb7a822       dashboard-metrics-scraper-5f989dc9cf-qbkv6       kubernetes-dashboard
	73d881498864d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   a3605f662dd55       storage-provisioner                              kube-system
	bd5fb08be8644       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   32 seconds ago      Running             kubernetes-dashboard        0                   fce467e2331d0       kubernetes-dashboard-8694d4445c-gv4nv            kubernetes-dashboard
	c95f5a9eb7aae       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   68771cbb95217       busybox                                          default
	5eb7bc276d9ed       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   2bb9164e8eb3d       coredns-5dd5756b68-7bkvq                         kube-system
	414de1bfb6aec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   a3605f662dd55       storage-provisioner                              kube-system
	631bdbae35af1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   b1fd22c19999b       kindnet-mf7gh                                    kube-system
	60edd37d9535b       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  0                   ae4dc1e276392       kube-proxy-4xxp4                                 kube-system
	89080922c0159       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   a4eca8069d7ce       kube-apiserver-old-k8s-version-680619            kube-system
	8f84773f44821       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   7102a0927a307       kube-controller-manager-old-k8s-version-680619   kube-system
	b26645bb06793       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   9b4fa65b6a4c9       kube-scheduler-old-k8s-version-680619            kube-system
	b40d5aa13f158       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   5f130fbaf2759       etcd-old-k8s-version-680619                      kube-system
	
	
	==> coredns [5eb7bc276d9ede83cf7f9707c5d154ff245634aef28f4966849644db5a50f3a7] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53410 - 7682 "HINFO IN 4240874763406610744.8842321771112109544. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048473784s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-680619
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-680619
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=old-k8s-version-680619
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_31_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:31:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-680619
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:33:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:32:59 +0000   Wed, 19 Nov 2025 22:31:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:32:59 +0000   Wed, 19 Nov 2025 22:31:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:32:59 +0000   Wed, 19 Nov 2025 22:31:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:32:59 +0000   Wed, 19 Nov 2025 22:31:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-680619
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                58ea2120-251a-483f-9bb0-1cfccac1ceba
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-5dd5756b68-7bkvq                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-old-k8s-version-680619                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-mf7gh                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-old-k8s-version-680619             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-old-k8s-version-680619    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-4xxp4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-old-k8s-version-680619             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-qbkv6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-gv4nv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node old-k8s-version-680619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node old-k8s-version-680619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node old-k8s-version-680619 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s               node-controller  Node old-k8s-version-680619 event: Registered Node old-k8s-version-680619 in Controller
	  Normal  NodeReady                89s                kubelet          Node old-k8s-version-680619 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node old-k8s-version-680619 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node old-k8s-version-680619 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node old-k8s-version-680619 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node old-k8s-version-680619 event: Registered Node old-k8s-version-680619 in Controller
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [b40d5aa13f1581f4d75fa92e103d0cc9932c695d82287850952ad9cce1d98ba5] <==
	{"level":"info","ts":"2025-11-19T22:32:25.903968Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T22:32:25.903982Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-19T22:32:25.904157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-19T22:32:25.904237Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-19T22:32:25.904341Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:32:25.904382Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-19T22:32:25.906846Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-19T22:32:25.906901Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:32:25.90807Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-19T22:32:25.908355Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-19T22:32:25.908762Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-19T22:32:27.095574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-19T22:32:27.09562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-19T22:32:27.095648Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-19T22:32:27.095662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-19T22:32:27.095667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-19T22:32:27.095674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-19T22:32:27.095685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-19T22:32:27.096685Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-680619 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-19T22:32:27.096697Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:32:27.096725Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-19T22:32:27.096902Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-19T22:32:27.09693Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-19T22:32:27.097987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-19T22:32:27.097988Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:33:20 up  1:15,  0 user,  load average: 2.75, 2.76, 1.82
	Linux old-k8s-version-680619 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [631bdbae35af1a0fab26aaa35346ef686031049223c980f1c1523d8c16183109] <==
	I1119 22:32:28.872579       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:32:28.872790       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:32:28.872939       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:32:28.872955       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:32:28.872979       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:32:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:32:29.076257       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:32:29.076302       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:32:29.076317       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:32:29.076471       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:32:29.567956       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:32:29.567979       1 metrics.go:72] Registering metrics
	I1119 22:32:29.568054       1 controller.go:711] "Syncing nftables rules"
	I1119 22:32:39.081934       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:32:39.081981       1 main.go:301] handling current node
	I1119 22:32:49.076397       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:32:49.076436       1 main.go:301] handling current node
	I1119 22:32:59.076294       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:32:59.076324       1 main.go:301] handling current node
	I1119 22:33:09.078781       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:33:09.078832       1 main.go:301] handling current node
	I1119 22:33:19.083953       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:33:19.083990       1 main.go:301] handling current node
	
	
	==> kube-apiserver [89080922c0159e21e61091b24b9351b5cb28d703c1ed3ad99034c55326191766] <==
	I1119 22:32:28.027440       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:32:28.047723       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1119 22:32:28.092945       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1119 22:32:28.093024       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1119 22:32:28.093043       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1119 22:32:28.093144       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 22:32:28.093165       1 shared_informer.go:318] Caches are synced for configmaps
	I1119 22:32:28.093210       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1119 22:32:28.093244       1 aggregator.go:166] initial CRD sync complete...
	I1119 22:32:28.093253       1 autoregister_controller.go:141] Starting autoregister controller
	I1119 22:32:28.093258       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:32:28.093265       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:32:28.093357       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1119 22:32:28.098941       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 22:32:28.836727       1 controller.go:624] quota admission added evaluator for: namespaces
	I1119 22:32:28.862316       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1119 22:32:28.877332       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:32:28.883434       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:32:28.890988       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1119 22:32:28.925972       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.86.19"}
	I1119 22:32:28.937849       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.200.168"}
	I1119 22:32:28.998294       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:32:40.257102       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1119 22:32:40.279625       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:32:40.287414       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [8f84773f448215b180ee3539cd8a463b1872e20afd8aa7857fae9f872b39a9c0] <==
	I1119 22:32:40.282686       1 shared_informer.go:318] Caches are synced for cronjob
	I1119 22:32:40.292014       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1119 22:32:40.292299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="13.828496ms"
	I1119 22:32:40.292394       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.682µs"
	I1119 22:32:40.294633       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.415385ms"
	I1119 22:32:40.294730       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="59.524µs"
	I1119 22:32:40.298619       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.896µs"
	I1119 22:32:40.345084       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1119 22:32:40.347325       1 shared_informer.go:318] Caches are synced for job
	I1119 22:32:40.359612       1 shared_informer.go:318] Caches are synced for persistent volume
	I1119 22:32:40.374226       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 22:32:40.410721       1 shared_informer.go:318] Caches are synced for resource quota
	I1119 22:32:40.463422       1 shared_informer.go:318] Caches are synced for HPA
	I1119 22:32:40.777128       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:32:40.777159       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1119 22:32:40.784273       1 shared_informer.go:318] Caches are synced for garbage collector
	I1119 22:32:43.409720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.297µs"
	I1119 22:32:44.414785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="140.986µs"
	I1119 22:32:45.417946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="129.857µs"
	I1119 22:32:47.431019       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.404981ms"
	I1119 22:32:47.431101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="36.843µs"
	I1119 22:33:01.732121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.535111ms"
	I1119 22:33:01.732274       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.258µs"
	I1119 22:33:02.463115       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="83.275µs"
	I1119 22:33:10.588094       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.612µs"
	
	
	==> kube-proxy [60edd37d9535be6816f9e4f45d547b93a9514cd2c28698c56bf7d909151f9696] <==
	I1119 22:32:28.743853       1 server_others.go:69] "Using iptables proxy"
	I1119 22:32:28.752754       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1119 22:32:28.771359       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:32:28.773633       1 server_others.go:152] "Using iptables Proxier"
	I1119 22:32:28.773668       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1119 22:32:28.773677       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1119 22:32:28.773710       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1119 22:32:28.774068       1 server.go:846] "Version info" version="v1.28.0"
	I1119 22:32:28.774088       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:32:28.775514       1 config.go:188] "Starting service config controller"
	I1119 22:32:28.776398       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1119 22:32:28.775623       1 config.go:315] "Starting node config controller"
	I1119 22:32:28.776497       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1119 22:32:28.776100       1 config.go:97] "Starting endpoint slice config controller"
	I1119 22:32:28.776545       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1119 22:32:28.877358       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1119 22:32:28.877393       1 shared_informer.go:318] Caches are synced for service config
	I1119 22:32:28.877411       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b26645bb067934e3f245a0dc0ee3200d5ec7b936438cf91b80afef3be85e62af] <==
	I1119 22:32:26.256886       1 serving.go:348] Generated self-signed cert in-memory
	I1119 22:32:28.046318       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1119 22:32:28.046339       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:32:28.050087       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1119 22:32:28.050112       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1119 22:32:28.050120       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:32:28.050133       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1119 22:32:28.050161       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:32:28.050184       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1119 22:32:28.051037       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1119 22:32:28.051345       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1119 22:32:28.150931       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1119 22:32:28.150944       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1119 22:32:28.150931       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 19 22:32:40 old-k8s-version-680619 kubelet[742]: I1119 22:32:40.387036     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/742e4e38-0bcd-405e-8b42-aa37e875d6b6-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-gv4nv\" (UID: \"742e4e38-0bcd-405e-8b42-aa37e875d6b6\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gv4nv"
	Nov 19 22:32:40 old-k8s-version-680619 kubelet[742]: I1119 22:32:40.387096     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jq9n\" (UniqueName: \"kubernetes.io/projected/742e4e38-0bcd-405e-8b42-aa37e875d6b6-kube-api-access-5jq9n\") pod \"kubernetes-dashboard-8694d4445c-gv4nv\" (UID: \"742e4e38-0bcd-405e-8b42-aa37e875d6b6\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gv4nv"
	Nov 19 22:32:40 old-k8s-version-680619 kubelet[742]: I1119 22:32:40.387183     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5pht\" (UniqueName: \"kubernetes.io/projected/98c7d01b-bf85-4e98-b193-c023a7d173da-kube-api-access-t5pht\") pod \"dashboard-metrics-scraper-5f989dc9cf-qbkv6\" (UID: \"98c7d01b-bf85-4e98-b193-c023a7d173da\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6"
	Nov 19 22:32:40 old-k8s-version-680619 kubelet[742]: I1119 22:32:40.387237     742 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/98c7d01b-bf85-4e98-b193-c023a7d173da-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-qbkv6\" (UID: \"98c7d01b-bf85-4e98-b193-c023a7d173da\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6"
	Nov 19 22:32:43 old-k8s-version-680619 kubelet[742]: I1119 22:32:43.398148     742 scope.go:117] "RemoveContainer" containerID="14802135cedc5baccdefb683aaae6d5e500cddbb637863c78c0d85a34ddfffd6"
	Nov 19 22:32:44 old-k8s-version-680619 kubelet[742]: I1119 22:32:44.402115     742 scope.go:117] "RemoveContainer" containerID="14802135cedc5baccdefb683aaae6d5e500cddbb637863c78c0d85a34ddfffd6"
	Nov 19 22:32:44 old-k8s-version-680619 kubelet[742]: I1119 22:32:44.402423     742 scope.go:117] "RemoveContainer" containerID="452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1"
	Nov 19 22:32:44 old-k8s-version-680619 kubelet[742]: E1119 22:32:44.402806     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qbkv6_kubernetes-dashboard(98c7d01b-bf85-4e98-b193-c023a7d173da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6" podUID="98c7d01b-bf85-4e98-b193-c023a7d173da"
	Nov 19 22:32:45 old-k8s-version-680619 kubelet[742]: I1119 22:32:45.406157     742 scope.go:117] "RemoveContainer" containerID="452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1"
	Nov 19 22:32:45 old-k8s-version-680619 kubelet[742]: E1119 22:32:45.406524     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qbkv6_kubernetes-dashboard(98c7d01b-bf85-4e98-b193-c023a7d173da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6" podUID="98c7d01b-bf85-4e98-b193-c023a7d173da"
	Nov 19 22:32:47 old-k8s-version-680619 kubelet[742]: I1119 22:32:47.424681     742 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-gv4nv" podStartSLOduration=1.15831161 podCreationTimestamp="2025-11-19 22:32:40 +0000 UTC" firstStartedPulling="2025-11-19 22:32:40.604359479 +0000 UTC m=+15.347348323" lastFinishedPulling="2025-11-19 22:32:46.870669637 +0000 UTC m=+21.613658481" observedRunningTime="2025-11-19 22:32:47.424391276 +0000 UTC m=+22.167380129" watchObservedRunningTime="2025-11-19 22:32:47.424621768 +0000 UTC m=+22.167610619"
	Nov 19 22:32:50 old-k8s-version-680619 kubelet[742]: I1119 22:32:50.578641     742 scope.go:117] "RemoveContainer" containerID="452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1"
	Nov 19 22:32:50 old-k8s-version-680619 kubelet[742]: E1119 22:32:50.578928     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qbkv6_kubernetes-dashboard(98c7d01b-bf85-4e98-b193-c023a7d173da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6" podUID="98c7d01b-bf85-4e98-b193-c023a7d173da"
	Nov 19 22:32:59 old-k8s-version-680619 kubelet[742]: I1119 22:32:59.441724     742 scope.go:117] "RemoveContainer" containerID="414de1bfb6aec59dd4d86bf9f2fea33a808f25024113c43be7b1c30c813216b0"
	Nov 19 22:33:02 old-k8s-version-680619 kubelet[742]: I1119 22:33:02.347373     742 scope.go:117] "RemoveContainer" containerID="452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1"
	Nov 19 22:33:02 old-k8s-version-680619 kubelet[742]: I1119 22:33:02.452552     742 scope.go:117] "RemoveContainer" containerID="452d1283ab638ca6da86ccda58904c22ca7f50e4effe7bf842d7892d21b5b5c1"
	Nov 19 22:33:02 old-k8s-version-680619 kubelet[742]: I1119 22:33:02.452770     742 scope.go:117] "RemoveContainer" containerID="81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a"
	Nov 19 22:33:02 old-k8s-version-680619 kubelet[742]: E1119 22:33:02.453171     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qbkv6_kubernetes-dashboard(98c7d01b-bf85-4e98-b193-c023a7d173da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6" podUID="98c7d01b-bf85-4e98-b193-c023a7d173da"
	Nov 19 22:33:10 old-k8s-version-680619 kubelet[742]: I1119 22:33:10.578477     742 scope.go:117] "RemoveContainer" containerID="81d3c1d628f853bffae4962cfb39a41d6c3f8c69fa48ced6a62a0d44d733bc6a"
	Nov 19 22:33:10 old-k8s-version-680619 kubelet[742]: E1119 22:33:10.578756     742 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-qbkv6_kubernetes-dashboard(98c7d01b-bf85-4e98-b193-c023a7d173da)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-qbkv6" podUID="98c7d01b-bf85-4e98-b193-c023a7d173da"
	Nov 19 22:33:15 old-k8s-version-680619 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:33:15 old-k8s-version-680619 kubelet[742]: I1119 22:33:15.492667     742 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 19 22:33:15 old-k8s-version-680619 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:33:15 old-k8s-version-680619 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 22:33:15 old-k8s-version-680619 systemd[1]: kubelet.service: Consumed 1.336s CPU time.
	
	
	==> kubernetes-dashboard [bd5fb08be8644f6048088566e05635f71fa87e5bcbb3288f49cac595a87fda22] <==
	2025/11/19 22:32:46 Using namespace: kubernetes-dashboard
	2025/11/19 22:32:46 Using in-cluster config to connect to apiserver
	2025/11/19 22:32:46 Using secret token for csrf signing
	2025/11/19 22:32:46 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 22:32:46 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 22:32:46 Successful initial request to the apiserver, version: v1.28.0
	2025/11/19 22:32:46 Generating JWE encryption key
	2025/11/19 22:32:46 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 22:32:46 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 22:32:47 Initializing JWE encryption key from synchronized object
	2025/11/19 22:32:47 Creating in-cluster Sidecar client
	2025/11/19 22:32:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:32:47 Serving insecurely on HTTP port: 9090
	2025/11/19 22:33:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:32:46 Starting overwatch
	
	
	==> storage-provisioner [414de1bfb6aec59dd4d86bf9f2fea33a808f25024113c43be7b1c30c813216b0] <==
	I1119 22:32:28.716117       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 22:32:58.718313       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [73d881498864dc714b5d899ea9180eb714418ab2c2ac07c6c45be10aa174996b] <==
	I1119 22:32:59.489249       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:32:59.495964       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:32:59.496002       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1119 22:33:16.891556       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:33:16.891721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-680619_53e05059-dfbc-43f7-af5e-f3950d689b7d!
	I1119 22:33:16.891696       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"aa8da102-18e8-4e00-96cc-7642d9f355a2", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-680619_53e05059-dfbc-43f7-af5e-f3950d689b7d became leader
	I1119 22:33:16.992496       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-680619_53e05059-dfbc-43f7-af5e-f3950d689b7d!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-680619 -n old-k8s-version-680619
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-680619 -n old-k8s-version-680619: exit status 2 (335.062466ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-680619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.63s)
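For local triage, the same post-mortem checks the helper runs can be repeated by hand. This is a minimal sketch, not part of the recorded run: the profile name old-k8s-version-680619 and the commands are taken verbatim from the helper output above, and the out/minikube-linux-amd64 path assumes a local minikube build.

	# API server status probe used by helpers_test.go (Go template selects the APIServer field)
	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-680619 -n old-k8s-version-680619
	# List pods not in the Running phase, as the post-mortem step does
	kubectl --context old-k8s-version-680619 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running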

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-178067 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-178067 --alsologtostderr -v=1: exit status 80 (2.261566857s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-178067 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:33:42.272383  261887 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:33:42.272499  261887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:42.272511  261887 out.go:374] Setting ErrFile to fd 2...
	I1119 22:33:42.272516  261887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:42.272744  261887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:33:42.272963  261887 out.go:368] Setting JSON to false
	I1119 22:33:42.273003  261887 mustload.go:66] Loading cluster: no-preload-178067
	I1119 22:33:42.273335  261887 config.go:182] Loaded profile config "no-preload-178067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:42.273693  261887 cli_runner.go:164] Run: docker container inspect no-preload-178067 --format={{.State.Status}}
	I1119 22:33:42.293399  261887 host.go:66] Checking if "no-preload-178067" exists ...
	I1119 22:33:42.293629  261887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:33:42.356620  261887 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-19 22:33:42.345397811 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:33:42.357376  261887 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763575914-21918/minikube-v1.37.0-1763575914-21918-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763575914-21918-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-178067 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 22:33:42.361944  261887 out.go:179] * Pausing node no-preload-178067 ... 
	I1119 22:33:42.363103  261887 host.go:66] Checking if "no-preload-178067" exists ...
	I1119 22:33:42.363333  261887 ssh_runner.go:195] Run: systemctl --version
	I1119 22:33:42.363368  261887 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178067
	I1119 22:33:42.382774  261887 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/no-preload-178067/id_rsa Username:docker}
	I1119 22:33:42.476756  261887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:33:42.501889  261887 pause.go:52] kubelet running: true
	I1119 22:33:42.501961  261887 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:33:42.685960  261887 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:33:42.686072  261887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:33:42.754735  261887 cri.go:89] found id: "ca69bb794dfbdb19cc9d54b5b28bbc9d94279dffeb4a9e6a23344c42401ad048"
	I1119 22:33:42.754769  261887 cri.go:89] found id: "5d9a3926452fe1153e2f2a4f626a6a7edc0937440208143a2bbde7bf7330c415"
	I1119 22:33:42.754775  261887 cri.go:89] found id: "63b4d5c69223fdefa7ca853e7e38f705bdc5541b5c4cdcb98fb26b40f27b3d10"
	I1119 22:33:42.754780  261887 cri.go:89] found id: "c4eb1fb19b099d7480679ca495008b509002cc63b9e988d15483d29f4cffa841"
	I1119 22:33:42.754784  261887 cri.go:89] found id: "86197cbc9c40eb4956802a892d3451ccc5f998c8c7d732efd889058c5af9dc86"
	I1119 22:33:42.754795  261887 cri.go:89] found id: "2b0da6046bd3a9d1409a02171cd110e7f7c80d13375006ef7726a6948b964a45"
	I1119 22:33:42.754798  261887 cri.go:89] found id: "4b15ce24a3aaf48f3b98e89cd8a66d0595225b1070cc8af2af5fbc40d5f34ef7"
	I1119 22:33:42.754800  261887 cri.go:89] found id: "8cdd1b2386fc9d6e80ae7431ec6d46c12963b7da1447247ecf7b9cd33805a53e"
	I1119 22:33:42.754803  261887 cri.go:89] found id: "a8dcf65794e2178ac75421c7fa689f31104856b8f819faab188b47806609c062"
	I1119 22:33:42.754830  261887 cri.go:89] found id: "1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed"
	I1119 22:33:42.754839  261887 cri.go:89] found id: "1b322960c77f50cdccffcfe8abe1d997e9c28f67a27b18ffb8d0b3ecb03a0409"
	I1119 22:33:42.754844  261887 cri.go:89] found id: ""
	I1119 22:33:42.754889  261887 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:33:42.766421  261887 retry.go:31] will retry after 169.952693ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:33:42Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:33:42.936840  261887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:33:42.949731  261887 pause.go:52] kubelet running: false
	I1119 22:33:42.949788  261887 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:33:43.089572  261887 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:33:43.089658  261887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:33:43.155883  261887 cri.go:89] found id: "ca69bb794dfbdb19cc9d54b5b28bbc9d94279dffeb4a9e6a23344c42401ad048"
	I1119 22:33:43.155910  261887 cri.go:89] found id: "5d9a3926452fe1153e2f2a4f626a6a7edc0937440208143a2bbde7bf7330c415"
	I1119 22:33:43.155915  261887 cri.go:89] found id: "63b4d5c69223fdefa7ca853e7e38f705bdc5541b5c4cdcb98fb26b40f27b3d10"
	I1119 22:33:43.155920  261887 cri.go:89] found id: "c4eb1fb19b099d7480679ca495008b509002cc63b9e988d15483d29f4cffa841"
	I1119 22:33:43.155925  261887 cri.go:89] found id: "86197cbc9c40eb4956802a892d3451ccc5f998c8c7d732efd889058c5af9dc86"
	I1119 22:33:43.155930  261887 cri.go:89] found id: "2b0da6046bd3a9d1409a02171cd110e7f7c80d13375006ef7726a6948b964a45"
	I1119 22:33:43.155933  261887 cri.go:89] found id: "4b15ce24a3aaf48f3b98e89cd8a66d0595225b1070cc8af2af5fbc40d5f34ef7"
	I1119 22:33:43.155937  261887 cri.go:89] found id: "8cdd1b2386fc9d6e80ae7431ec6d46c12963b7da1447247ecf7b9cd33805a53e"
	I1119 22:33:43.155941  261887 cri.go:89] found id: "a8dcf65794e2178ac75421c7fa689f31104856b8f819faab188b47806609c062"
	I1119 22:33:43.155948  261887 cri.go:89] found id: "1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed"
	I1119 22:33:43.155952  261887 cri.go:89] found id: "1b322960c77f50cdccffcfe8abe1d997e9c28f67a27b18ffb8d0b3ecb03a0409"
	I1119 22:33:43.155956  261887 cri.go:89] found id: ""
	I1119 22:33:43.156008  261887 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:33:43.171475  261887 retry.go:31] will retry after 210.340366ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:33:43Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:33:43.382864  261887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:33:43.397481  261887 pause.go:52] kubelet running: false
	I1119 22:33:43.397540  261887 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:33:43.565646  261887 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:33:43.565758  261887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:33:43.636510  261887 cri.go:89] found id: "ca69bb794dfbdb19cc9d54b5b28bbc9d94279dffeb4a9e6a23344c42401ad048"
	I1119 22:33:43.636535  261887 cri.go:89] found id: "5d9a3926452fe1153e2f2a4f626a6a7edc0937440208143a2bbde7bf7330c415"
	I1119 22:33:43.636541  261887 cri.go:89] found id: "63b4d5c69223fdefa7ca853e7e38f705bdc5541b5c4cdcb98fb26b40f27b3d10"
	I1119 22:33:43.636545  261887 cri.go:89] found id: "c4eb1fb19b099d7480679ca495008b509002cc63b9e988d15483d29f4cffa841"
	I1119 22:33:43.636549  261887 cri.go:89] found id: "86197cbc9c40eb4956802a892d3451ccc5f998c8c7d732efd889058c5af9dc86"
	I1119 22:33:43.636554  261887 cri.go:89] found id: "2b0da6046bd3a9d1409a02171cd110e7f7c80d13375006ef7726a6948b964a45"
	I1119 22:33:43.636558  261887 cri.go:89] found id: "4b15ce24a3aaf48f3b98e89cd8a66d0595225b1070cc8af2af5fbc40d5f34ef7"
	I1119 22:33:43.636561  261887 cri.go:89] found id: "8cdd1b2386fc9d6e80ae7431ec6d46c12963b7da1447247ecf7b9cd33805a53e"
	I1119 22:33:43.636566  261887 cri.go:89] found id: "a8dcf65794e2178ac75421c7fa689f31104856b8f819faab188b47806609c062"
	I1119 22:33:43.636586  261887 cri.go:89] found id: "1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed"
	I1119 22:33:43.636591  261887 cri.go:89] found id: "1b322960c77f50cdccffcfe8abe1d997e9c28f67a27b18ffb8d0b3ecb03a0409"
	I1119 22:33:43.636596  261887 cri.go:89] found id: ""
	I1119 22:33:43.636649  261887 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:33:43.648411  261887 retry.go:31] will retry after 547.500695ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:33:43Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:33:44.197572  261887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:33:44.215844  261887 pause.go:52] kubelet running: false
	I1119 22:33:44.216035  261887 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:33:44.383842  261887 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:33:44.383910  261887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:33:44.449622  261887 cri.go:89] found id: "ca69bb794dfbdb19cc9d54b5b28bbc9d94279dffeb4a9e6a23344c42401ad048"
	I1119 22:33:44.449643  261887 cri.go:89] found id: "5d9a3926452fe1153e2f2a4f626a6a7edc0937440208143a2bbde7bf7330c415"
	I1119 22:33:44.449646  261887 cri.go:89] found id: "63b4d5c69223fdefa7ca853e7e38f705bdc5541b5c4cdcb98fb26b40f27b3d10"
	I1119 22:33:44.449649  261887 cri.go:89] found id: "c4eb1fb19b099d7480679ca495008b509002cc63b9e988d15483d29f4cffa841"
	I1119 22:33:44.449652  261887 cri.go:89] found id: "86197cbc9c40eb4956802a892d3451ccc5f998c8c7d732efd889058c5af9dc86"
	I1119 22:33:44.449655  261887 cri.go:89] found id: "2b0da6046bd3a9d1409a02171cd110e7f7c80d13375006ef7726a6948b964a45"
	I1119 22:33:44.449658  261887 cri.go:89] found id: "4b15ce24a3aaf48f3b98e89cd8a66d0595225b1070cc8af2af5fbc40d5f34ef7"
	I1119 22:33:44.449660  261887 cri.go:89] found id: "8cdd1b2386fc9d6e80ae7431ec6d46c12963b7da1447247ecf7b9cd33805a53e"
	I1119 22:33:44.449663  261887 cri.go:89] found id: "a8dcf65794e2178ac75421c7fa689f31104856b8f819faab188b47806609c062"
	I1119 22:33:44.449677  261887 cri.go:89] found id: "1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed"
	I1119 22:33:44.449680  261887 cri.go:89] found id: "1b322960c77f50cdccffcfe8abe1d997e9c28f67a27b18ffb8d0b3ecb03a0409"
	I1119 22:33:44.449684  261887 cri.go:89] found id: ""
	I1119 22:33:44.449727  261887 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:33:44.462963  261887 out.go:203] 
	W1119 22:33:44.464061  261887 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:33:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 22:33:44.464077  261887 out.go:285] * 
	W1119 22:33:44.468274  261887 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 22:33:44.469515  261887 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-178067 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-178067
helpers_test.go:243: (dbg) docker inspect no-preload-178067:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37",
	        "Created": "2025-11-19T22:31:25.543221838Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 247397,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:32:38.644503734Z",
	            "FinishedAt": "2025-11-19T22:32:37.760634905Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37/hostname",
	        "HostsPath": "/var/lib/docker/containers/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37/hosts",
	        "LogPath": "/var/lib/docker/containers/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37-json.log",
	        "Name": "/no-preload-178067",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-178067:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-178067",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37",
	                "LowerDir": "/var/lib/docker/overlay2/178cd503a6e39d7ee32119169c14a96aa506d8f743cbd3d41514328f24c0a6f7-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/178cd503a6e39d7ee32119169c14a96aa506d8f743cbd3d41514328f24c0a6f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/178cd503a6e39d7ee32119169c14a96aa506d8f743cbd3d41514328f24c0a6f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/178cd503a6e39d7ee32119169c14a96aa506d8f743cbd3d41514328f24c0a6f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-178067",
	                "Source": "/var/lib/docker/volumes/no-preload-178067/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-178067",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-178067",
	                "name.minikube.sigs.k8s.io": "no-preload-178067",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3902cc69095f95a615fc7ef19c18587d730c38025b6ec3a50aa50e0aae990dd7",
	            "SandboxKey": "/var/run/docker/netns/3902cc69095f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-178067": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d2927e8174830464514428039b44b26b0e43356a4a3627c8d30f3646150dbf7f",
	                    "EndpointID": "80b5feed6a2485b52e7ce1305786570683a37f298dc094eb6424434caf03315b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "aa:92:ea:15:6a:50",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-178067",
	                        "4349f03a9605"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-178067 -n no-preload-178067
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-178067 -n no-preload-178067: exit status 2 (311.980972ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-178067 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-178067 logs -n 25: (1.106297469s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-801704    │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ ssh     │ -p NoKubernetes-662839 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-662839          │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ delete  │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839          │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ delete  │ -p missing-upgrade-015670                                                                                                                                                                                                                     │ missing-upgrade-015670       │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-680619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ stop    │ -p old-k8s-version-680619 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680619 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-178067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ stop    │ -p no-preload-178067 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable dashboard -p no-preload-178067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p cert-expiration-855818 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-855818       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ delete  │ -p cert-expiration-855818                                                                                                                                                                                                                     │ cert-expiration-855818       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ image   │ old-k8s-version-680619 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p old-k8s-version-680619 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p disable-driver-mounts-726490                                                                                                                                                                                                               │ disable-driver-mounts-726490 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ image   │ no-preload-178067 image list --format=json                                                                                                                                                                                                    │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p no-preload-178067 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:33:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:33:23.883705  257842 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:33:23.883983  257842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:23.883993  257842 out.go:374] Setting ErrFile to fd 2...
	I1119 22:33:23.883997  257842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:23.884187  257842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:33:23.884673  257842 out.go:368] Setting JSON to false
	I1119 22:33:23.885756  257842 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4552,"bootTime":1763587052,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:33:23.885849  257842 start.go:143] virtualization: kvm guest
	I1119 22:33:23.887726  257842 out.go:179] * [default-k8s-diff-port-409987] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:33:23.889070  257842 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:33:23.889070  257842 notify.go:221] Checking for updates...
	I1119 22:33:23.891485  257842 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:33:23.892734  257842 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:33:23.893909  257842 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:33:23.895062  257842 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:33:23.896153  257842 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:33:23.897750  257842 config.go:182] Loaded profile config "embed-certs-443380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:23.897897  257842 config.go:182] Loaded profile config "kubernetes-upgrade-801704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:23.898024  257842 config.go:182] Loaded profile config "no-preload-178067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:23.898147  257842 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:33:23.925695  257842 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:33:23.925842  257842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:33:23.983931  257842 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:33:23.974160621 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:33:23.984034  257842 docker.go:319] overlay module found
	I1119 22:33:23.985686  257842 out.go:179] * Using the docker driver based on user configuration
	I1119 22:33:23.986806  257842 start.go:309] selected driver: docker
	I1119 22:33:23.986842  257842 start.go:930] validating driver "docker" against <nil>
	I1119 22:33:23.986855  257842 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:33:23.987349  257842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:33:24.044957  257842 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:33:24.035470502 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:33:24.045358  257842 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:33:24.045644  257842 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:33:24.047182  257842 out.go:179] * Using Docker driver with root privileges
	I1119 22:33:24.048300  257842 cni.go:84] Creating CNI manager for ""
	I1119 22:33:24.048398  257842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:33:24.048413  257842 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:33:24.048479  257842 start.go:353] cluster config:
	{Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:33:24.049668  257842 out.go:179] * Starting "default-k8s-diff-port-409987" primary control-plane node in "default-k8s-diff-port-409987" cluster
	I1119 22:33:24.050617  257842 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:33:24.051685  257842 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:33:24.052672  257842 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:24.052710  257842 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:33:24.052717  257842 cache.go:65] Caching tarball of preloaded images
	I1119 22:33:24.052766  257842 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:33:24.052856  257842 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:33:24.052873  257842 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:33:24.052980  257842 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json ...
	I1119 22:33:24.053013  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json: {Name:mkd16b9878826f2245b2c07a772bd12235442172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:24.072676  257842 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:33:24.072691  257842 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:33:24.072705  257842 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:33:24.072727  257842 start.go:360] acquireMachinesLock for default-k8s-diff-port-409987: {Name:mk3691865877e78ad0fe52d2c0e71ee1c1c3699a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:33:24.072831  257842 start.go:364] duration metric: took 71.579µs to acquireMachinesLock for "default-k8s-diff-port-409987"
	I1119 22:33:24.072860  257842 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:33:24.072935  257842 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:33:21.846845  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:22.347017  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:22.847034  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:23.346898  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:23.846436  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:24.346943  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:24.846671  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:25.346975  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:25.420030  252325 kubeadm.go:1114] duration metric: took 4.651844422s to wait for elevateKubeSystemPrivileges
	I1119 22:33:25.420066  252325 kubeadm.go:403] duration metric: took 14.384664171s to StartCluster
	I1119 22:33:25.420088  252325 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:25.420154  252325 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:33:25.422122  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:25.422376  252325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:33:25.422394  252325 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:33:25.422458  252325 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:33:25.422555  252325 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-443380"
	I1119 22:33:25.422587  252325 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-443380"
	I1119 22:33:25.422585  252325 addons.go:70] Setting default-storageclass=true in profile "embed-certs-443380"
	I1119 22:33:25.422605  252325 config.go:182] Loaded profile config "embed-certs-443380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:25.422616  252325 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-443380"
	I1119 22:33:25.422620  252325 host.go:66] Checking if "embed-certs-443380" exists ...
	I1119 22:33:25.423009  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:25.423154  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:25.425572  252325 out.go:179] * Verifying Kubernetes components...
	I1119 22:33:25.427178  252325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:25.446337  252325 addons.go:239] Setting addon default-storageclass=true in "embed-certs-443380"
	I1119 22:33:25.446384  252325 host.go:66] Checking if "embed-certs-443380" exists ...
	I1119 22:33:25.446890  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:25.448940  252325 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:33:25.450228  252325 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:33:25.450251  252325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:33:25.450306  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:25.480574  252325 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:33:25.480600  252325 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:33:25.480661  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:25.481387  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:25.506078  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:25.523359  252325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:33:25.586976  252325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:33:25.611710  252325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:33:25.635667  252325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:33:25.747803  252325 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 22:33:25.750001  252325 node_ready.go:35] waiting up to 6m0s for node "embed-certs-443380" to be "Ready" ...
	I1119 22:33:25.969838  252325 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:33:25.970910  252325 addons.go:515] duration metric: took 548.451841ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:33:26.253634  252325 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-443380" context rescaled to 1 replicas
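The 252325 lines above show the complete addon-enable path for embed-certs-443380: scp the storage-provisioner and storageclass manifests into /etc/kubernetes/addons, apply them with the bundled kubectl, and splice a host.minikube.internal record into the CoreDNS ConfigMap. As a rough stand-alone sketch of that last step (it mirrors the sed invocation in the log; the 192.168.85.1 gateway address comes from the log, and a normal kubeconfig is assumed instead of the in-node paths minikube uses):

	# Splice a hosts{} block in front of the "forward . /etc/resolv.conf" line
	# of the coredns ConfigMap, then push the edited manifest back with replace.
	kubectl -n kube-system get configmap coredns -o yaml \
	  | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' \
	  | kubectl replace -f -
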
	I1119 22:33:22.382769  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:22.383154  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:22.383202  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:22.383251  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:22.412635  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:22.412654  229026 cri.go:89] found id: ""
	I1119 22:33:22.412662  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:22.412702  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:22.416473  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:22.416531  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:22.442074  229026 cri.go:89] found id: ""
	I1119 22:33:22.442093  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.442100  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:22.442105  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:22.442152  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:22.467611  229026 cri.go:89] found id: ""
	I1119 22:33:22.467633  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.467641  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:22.467648  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:22.467703  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:22.494154  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:22.494172  229026 cri.go:89] found id: ""
	I1119 22:33:22.494180  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:22.494229  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:22.497892  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:22.497950  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:22.523686  229026 cri.go:89] found id: ""
	I1119 22:33:22.523711  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.523720  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:22.523729  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:22.523785  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:22.549770  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:22.549794  229026 cri.go:89] found id: "c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:22.549799  229026 cri.go:89] found id: ""
	I1119 22:33:22.549810  229026 logs.go:282] 2 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a]
	I1119 22:33:22.549889  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:22.554433  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:22.558149  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:22.558194  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:22.594272  229026 cri.go:89] found id: ""
	I1119 22:33:22.594299  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.594309  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:22.594317  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:22.594359  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:22.625976  229026 cri.go:89] found id: ""
	I1119 22:33:22.626001  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.626012  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:22.626027  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:22.626038  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:22.660094  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:22.660123  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:22.676931  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:22.676957  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:22.733420  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:22.733439  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:22.733450  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:22.765920  229026 logs.go:123] Gathering logs for kube-controller-manager [c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a] ...
	I1119 22:33:22.765952  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:22.791770  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:22.791795  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:22.832968  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:22.832994  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:22.920507  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:22.920540  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:22.985203  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:22.985241  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:25.512901  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:25.514058  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:25.514118  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:25.514214  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:25.556844  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:25.556876  229026 cri.go:89] found id: ""
	I1119 22:33:25.556887  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:25.556952  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:25.562892  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:25.562953  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:25.605067  229026 cri.go:89] found id: ""
	I1119 22:33:25.605124  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.605136  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:25.605145  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:25.605204  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:25.644356  229026 cri.go:89] found id: ""
	I1119 22:33:25.644385  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.644395  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:25.644403  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:25.644460  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:25.683152  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:25.683178  229026 cri.go:89] found id: ""
	I1119 22:33:25.683273  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:25.683342  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:25.688089  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:25.688208  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:25.725026  229026 cri.go:89] found id: ""
	I1119 22:33:25.725056  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.725065  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:25.725073  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:25.725244  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:25.761160  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:25.761204  229026 cri.go:89] found id: ""
	I1119 22:33:25.761216  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:25.761282  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:25.766966  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:25.767028  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:25.804510  229026 cri.go:89] found id: ""
	I1119 22:33:25.804540  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.804551  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:25.804559  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:25.804622  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:25.837652  229026 cri.go:89] found id: ""
	I1119 22:33:25.837679  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.837701  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:25.837712  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:25.837726  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:25.892405  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:25.892441  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:25.927183  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:25.927223  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:25.982585  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:25.982613  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:26.013887  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:26.013923  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:26.098577  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:26.098611  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:26.115217  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:26.115244  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:26.178958  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:26.178984  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:26.179005  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
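Throughout the 229026 run the same pattern repeats: probe the apiserver's /healthz endpoint, get connection refused because nothing is listening on 192.168.94.2:8443, then fall back to collecting kubelet, CRI-O, dmesg and per-container logs before trying again. A rough shell equivalent of that probe-and-gather loop (endpoint and gathering commands taken from the log; the loop shape and retry interval are illustrative assumptions):

	# Keep probing the apiserver health endpoint until it answers.
	until curl -ks --max-time 2 https://192.168.94.2:8443/healthz | grep -q ok; do
	  echo "apiserver still refusing connections, collecting evidence..."
	  # Same evidence minikube gathers while the control plane is down:
	  sudo crictl ps -a --quiet --name=kube-apiserver
	  sudo journalctl -u kubelet -n 400 --no-pager
	  sudo journalctl -u crio -n 400 --no-pager
	  sleep 5
	done
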
	W1119 22:33:23.608027  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:25.612411  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:28.107283  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	I1119 22:33:24.074503  257842 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:33:24.074696  257842 start.go:159] libmachine.API.Create for "default-k8s-diff-port-409987" (driver="docker")
	I1119 22:33:24.074724  257842 client.go:173] LocalClient.Create starting
	I1119 22:33:24.074791  257842 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem
	I1119 22:33:24.074871  257842 main.go:143] libmachine: Decoding PEM data...
	I1119 22:33:24.074891  257842 main.go:143] libmachine: Parsing certificate...
	I1119 22:33:24.074944  257842 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem
	I1119 22:33:24.074966  257842 main.go:143] libmachine: Decoding PEM data...
	I1119 22:33:24.074977  257842 main.go:143] libmachine: Parsing certificate...
	I1119 22:33:24.075254  257842 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:33:24.091285  257842 cli_runner.go:211] docker network inspect default-k8s-diff-port-409987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:33:24.091350  257842 network_create.go:284] running [docker network inspect default-k8s-diff-port-409987] to gather additional debugging logs...
	I1119 22:33:24.091365  257842 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409987
	W1119 22:33:24.108545  257842 cli_runner.go:211] docker network inspect default-k8s-diff-port-409987 returned with exit code 1
	I1119 22:33:24.108572  257842 network_create.go:287] error running [docker network inspect default-k8s-diff-port-409987]: docker network inspect default-k8s-diff-port-409987: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-409987 not found
	I1119 22:33:24.108587  257842 network_create.go:289] output of [docker network inspect default-k8s-diff-port-409987]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-409987 not found
	
	** /stderr **
	I1119 22:33:24.108708  257842 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:33:24.125616  257842 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cde0f356bd10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b5:fa:ba:e0:a6} reservation:<nil>}
	I1119 22:33:24.126341  257842 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-47fb5ce24a02 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:30:91:0e:d6:d9} reservation:<nil>}
	I1119 22:33:24.127005  257842 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2592199ffac9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:9b:dd:65:07:28} reservation:<nil>}
	I1119 22:33:24.127748  257842 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f40680}
	I1119 22:33:24.127768  257842 network_create.go:124] attempt to create docker network default-k8s-diff-port-409987 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 22:33:24.127824  257842 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 default-k8s-diff-port-409987
	I1119 22:33:24.174801  257842 network_create.go:108] docker network default-k8s-diff-port-409987 192.168.76.0/24 created
	I1119 22:33:24.174930  257842 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-409987" container
	I1119 22:33:24.174986  257842 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:33:24.193121  257842 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-409987 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:33:24.209597  257842 oci.go:103] Successfully created a docker volume default-k8s-diff-port-409987
	I1119 22:33:24.209672  257842 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-409987-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 --entrypoint /usr/bin/test -v default-k8s-diff-port-409987:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:33:24.605177  257842 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-409987
	I1119 22:33:24.605252  257842 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:24.605267  257842 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:33:24.605340  257842 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-409987:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
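For default-k8s-diff-port-409987 the 257842 lines show the standard KIC bring-up: pick the first free 192.168.x.0/24, create a labelled bridge network on it, create a volume to back /var, and use a throwaway kicbase container to untar the preloaded image tarball into that volume. Condensed from the logged commands (the image digest is dropped and the jenkins-specific cache path is shortened to $HOME/.minikube for readability):

	# Bridge network on the first free /24 the probe settled on (192.168.76.0/24).
	docker network create --driver=bridge \
	  --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 \
	  default-k8s-diff-port-409987
	# Volume that will back /var inside the node container.
	docker volume create default-k8s-diff-port-409987 \
	  --label name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 \
	  --label created_by.minikube.sigs.k8s.io=true
	# One-shot tar container that extracts the cri-o preload into the volume.
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro" \
	  -v default-k8s-diff-port-409987:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918 \
	  -I lz4 -xf /preloaded.tar -C /extractDir
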
	I1119 22:33:29.133052  247081 pod_ready.go:94] pod "coredns-66bc5c9577-9dwxr" is "Ready"
	I1119 22:33:29.133080  247081 pod_ready.go:86] duration metric: took 39.530851945s for pod "coredns-66bc5c9577-9dwxr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.138098  247081 pod_ready.go:83] waiting for pod "etcd-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.142937  247081 pod_ready.go:94] pod "etcd-no-preload-178067" is "Ready"
	I1119 22:33:29.142962  247081 pod_ready.go:86] duration metric: took 4.839499ms for pod "etcd-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.238949  247081 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.244009  247081 pod_ready.go:94] pod "kube-apiserver-no-preload-178067" is "Ready"
	I1119 22:33:29.244037  247081 pod_ready.go:86] duration metric: took 5.06142ms for pod "kube-apiserver-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.246567  247081 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.305183  247081 pod_ready.go:94] pod "kube-controller-manager-no-preload-178067" is "Ready"
	I1119 22:33:29.305208  247081 pod_ready.go:86] duration metric: took 58.619262ms for pod "kube-controller-manager-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.504991  247081 pod_ready.go:83] waiting for pod "kube-proxy-xll6z" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.905540  247081 pod_ready.go:94] pod "kube-proxy-xll6z" is "Ready"
	I1119 22:33:29.905566  247081 pod_ready.go:86] duration metric: took 400.551202ms for pod "kube-proxy-xll6z" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:30.105246  247081 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:30.505433  247081 pod_ready.go:94] pod "kube-scheduler-no-preload-178067" is "Ready"
	I1119 22:33:30.505459  247081 pod_ready.go:86] duration metric: took 400.188275ms for pod "kube-scheduler-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:30.505470  247081 pod_ready.go:40] duration metric: took 40.906421291s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:33:30.547626  247081 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:33:30.549623  247081 out.go:179] * Done! kubectl is now configured to use "no-preload-178067" cluster and "default" namespace by default
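Before printing "Done!", the 247081 run waits until each core kube-system pod (matched by the label selectors listed at 22:33:30) reports Ready. Roughly the same check can be reproduced from outside with kubectl (selectors and context name from the log; the timeout is an arbitrary illustrative value):

	# Wait for the same control-plane components the test waits on.
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context no-preload-178067 -n kube-system \
	    wait --for=condition=Ready pod -l "$sel" --timeout=120s
	done
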
	W1119 22:33:27.844382  252325 node_ready.go:57] node "embed-certs-443380" has "Ready":"False" status (will retry)
	W1119 22:33:30.253097  252325 node_ready.go:57] node "embed-certs-443380" has "Ready":"False" status (will retry)
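The 252325 run, meanwhile, is still inside its 6m0s node_ready loop for embed-certs-443380; the two warnings above are ordinary retries while the node's Ready condition is False. A rough external equivalent of that wait (context and node name from the profile; the 6m0s timeout is the one stated earlier in the log):

	kubectl --context embed-certs-443380 wait --for=condition=Ready \
	  node/embed-certs-443380 --timeout=6m0s
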
	I1119 22:33:28.710624  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:28.711065  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:28.711113  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:28.711160  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:28.736722  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:28.736744  229026 cri.go:89] found id: ""
	I1119 22:33:28.736752  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:28.736803  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:28.741111  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:28.741177  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:28.766295  229026 cri.go:89] found id: ""
	I1119 22:33:28.766319  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.766327  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:28.766333  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:28.766378  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:28.791972  229026 cri.go:89] found id: ""
	I1119 22:33:28.791994  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.792001  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:28.792006  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:28.792056  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:28.818307  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:28.818327  229026 cri.go:89] found id: ""
	I1119 22:33:28.818335  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:28.818394  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:28.822683  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:28.822764  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:28.856448  229026 cri.go:89] found id: ""
	I1119 22:33:28.856499  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.856510  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:28.856518  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:28.856580  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:28.882557  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:28.882584  229026 cri.go:89] found id: ""
	I1119 22:33:28.882592  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:28.882645  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:28.886479  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:28.886545  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:28.912563  229026 cri.go:89] found id: ""
	I1119 22:33:28.912588  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.912595  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:28.912601  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:28.912644  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:28.937277  229026 cri.go:89] found id: ""
	I1119 22:33:28.937299  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.937306  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:28.937315  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:28.937326  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:28.966343  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:28.966368  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:29.014708  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:29.014743  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:29.040387  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:29.040411  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:29.082359  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:29.082390  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:29.111167  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:29.111194  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:29.215828  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:29.215865  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:29.230491  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:29.230519  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:29.295659  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:29.158194  257842 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-409987:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.55281447s)
	I1119 22:33:29.158220  257842 kic.go:203] duration metric: took 4.552950236s to extract preloaded images to volume ...
	W1119 22:33:29.158286  257842 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:33:29.158312  257842 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:33:29.158344  257842 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:33:29.217611  257842 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-409987 --name default-k8s-diff-port-409987 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-409987 --network default-k8s-diff-port-409987 --ip 192.168.76.2 --volume default-k8s-diff-port-409987:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:33:29.532541  257842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Running}}
	I1119 22:33:29.551244  257842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:33:29.569223  257842 cli_runner.go:164] Run: docker exec default-k8s-diff-port-409987 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:33:29.614972  257842 oci.go:144] the created container "default-k8s-diff-port-409987" has a running status.
	I1119 22:33:29.614999  257842 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa...
	I1119 22:33:29.811803  257842 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:33:29.835714  257842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:33:29.852802  257842 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:33:29.852845  257842 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-409987 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:33:29.895797  257842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:33:29.913061  257842 machine.go:94] provisionDockerMachine start ...
	I1119 22:33:29.913137  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:29.929995  257842 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:29.930308  257842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1119 22:33:29.930328  257842 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:33:29.931145  257842 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54386->127.0.0.1:33078: read: connection reset by peer
	I1119 22:33:33.055705  257842 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409987
	
	I1119 22:33:33.055755  257842 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-409987"
	I1119 22:33:33.055830  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.073640  257842 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:33.073912  257842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1119 22:33:33.073935  257842 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-409987 && echo "default-k8s-diff-port-409987" | sudo tee /etc/hostname
	I1119 22:33:33.206352  257842 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409987
	
	I1119 22:33:33.206423  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.224632  257842 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:33.224930  257842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1119 22:33:33.224968  257842 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-409987' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-409987/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-409987' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:33:33.347746  257842 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:33:33.347776  257842 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 22:33:33.347844  257842 ubuntu.go:190] setting up certificates
	I1119 22:33:33.347867  257842 provision.go:84] configureAuth start
	I1119 22:33:33.347925  257842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:33:33.365007  257842 provision.go:143] copyHostCerts
	I1119 22:33:33.365064  257842 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem, removing ...
	I1119 22:33:33.365077  257842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem
	I1119 22:33:33.365153  257842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 22:33:33.365253  257842 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem, removing ...
	I1119 22:33:33.365265  257842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem
	I1119 22:33:33.365299  257842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 22:33:33.365384  257842 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem, removing ...
	I1119 22:33:33.365393  257842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem
	I1119 22:33:33.365439  257842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 22:33:33.365514  257842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-409987 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-409987 localhost minikube]
	I1119 22:33:33.469295  257842 provision.go:177] copyRemoteCerts
	I1119 22:33:33.469350  257842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:33:33.469399  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.487180  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:33.579229  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:33:33.598170  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:33:33.615332  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:33:33.631707  257842 provision.go:87] duration metric: took 283.825271ms to configureAuth
	I1119 22:33:33.631738  257842 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:33:33.631927  257842 config.go:182] Loaded profile config "default-k8s-diff-port-409987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:33.632038  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.649525  257842 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:33.649754  257842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1119 22:33:33.649776  257842 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:33:33.911864  257842 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:33:33.911899  257842 machine.go:97] duration metric: took 3.998818366s to provisionDockerMachine
	I1119 22:33:33.911921  257842 client.go:176] duration metric: took 9.837189219s to LocalClient.Create
	I1119 22:33:33.911944  257842 start.go:167] duration metric: took 9.837246112s to libmachine.API.Create "default-k8s-diff-port-409987"
	I1119 22:33:33.911958  257842 start.go:293] postStartSetup for "default-k8s-diff-port-409987" (driver="docker")
	I1119 22:33:33.911972  257842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:33:33.912049  257842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:33:33.912100  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.930567  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:34.023978  257842 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:33:34.027239  257842 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:33:34.027262  257842 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:33:34.027271  257842 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 22:33:34.027334  257842 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 22:33:34.027439  257842 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem -> 128292.pem in /etc/ssl/certs
	I1119 22:33:34.027574  257842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:33:34.034703  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:33:34.053030  257842 start.go:296] duration metric: took 141.059047ms for postStartSetup
	I1119 22:33:34.053328  257842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:33:34.071401  257842 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json ...
	I1119 22:33:34.071655  257842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:33:34.071702  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:34.089393  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:34.179354  257842 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:33:34.184028  257842 start.go:128] duration metric: took 10.111081087s to createHost
	I1119 22:33:34.184050  257842 start.go:83] releasing machines lock for "default-k8s-diff-port-409987", held for 10.111205257s
	I1119 22:33:34.184110  257842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:33:34.201570  257842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:33:34.201588  257842 ssh_runner.go:195] Run: cat /version.json
	I1119 22:33:34.201638  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:34.201643  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:34.219778  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:34.220185  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:34.307356  257842 ssh_runner.go:195] Run: systemctl --version
	I1119 22:33:34.377088  257842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:33:34.409301  257842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:33:34.413564  257842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:33:34.413625  257842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:33:34.439025  257842 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:33:34.439049  257842 start.go:496] detecting cgroup driver to use...
	I1119 22:33:34.439080  257842 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:33:34.439115  257842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:33:34.453624  257842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:33:34.464939  257842 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:33:34.464985  257842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:33:34.480085  257842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:33:34.496141  257842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:33:34.577139  257842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:33:34.661485  257842 docker.go:234] disabling docker service ...
	I1119 22:33:34.661548  257842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:33:34.680544  257842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:33:34.693829  257842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:33:34.778614  257842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:33:34.863617  257842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:33:34.876075  257842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:33:34.890553  257842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:33:34.890610  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.901356  257842 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 22:33:34.901423  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.910601  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.920150  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.929306  257842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:33:34.937318  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.946730  257842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.960309  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.968769  257842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:33:34.977040  257842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:33:34.984350  257842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:35.075418  257842 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:33:35.218176  257842 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:33:35.218239  257842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:33:35.221998  257842 start.go:564] Will wait 60s for crictl version
	I1119 22:33:35.222046  257842 ssh_runner.go:195] Run: which crictl
	I1119 22:33:35.225560  257842 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:33:35.248793  257842 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:33:35.248876  257842 ssh_runner.go:195] Run: crio --version
	I1119 22:33:35.277023  257842 ssh_runner.go:195] Run: crio --version
	I1119 22:33:35.307857  257842 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
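The CRI-O preparation just logged boils down to a few in-place edits to the drop-in config followed by a restart: point pause_image at registry.k8s.io/pause:3.10.1, force the systemd cgroup manager, and enable IP forwarding (the default_sysctls edit for unprivileged low ports is omitted here for brevity). Collapsed into a plain shell sequence, with the file path and values exactly as in the log; the closing crictl version is the same sanity check the test performs:

	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload
	sudo systemctl restart crio
	# CRI-O should come back reporting RuntimeVersion 1.34.2 over its socket.
	sudo crictl version
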
	W1119 22:33:32.253373  252325 node_ready.go:57] node "embed-certs-443380" has "Ready":"False" status (will retry)
	W1119 22:33:34.754649  252325 node_ready.go:57] node "embed-certs-443380" has "Ready":"False" status (will retry)
	I1119 22:33:31.796780  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:31.797236  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:31.797296  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:31.797357  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:31.822313  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:31.822330  229026 cri.go:89] found id: ""
	I1119 22:33:31.822337  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:31.822381  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:31.825911  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:31.825967  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:31.851805  229026 cri.go:89] found id: ""
	I1119 22:33:31.851852  229026 logs.go:282] 0 containers: []
	W1119 22:33:31.851859  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:31.851864  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:31.851918  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:31.877079  229026 cri.go:89] found id: ""
	I1119 22:33:31.877100  229026 logs.go:282] 0 containers: []
	W1119 22:33:31.877107  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:31.877113  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:31.877160  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:31.901847  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:31.901864  229026 cri.go:89] found id: ""
	I1119 22:33:31.901871  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:31.901909  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:31.906013  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:31.906067  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:31.930107  229026 cri.go:89] found id: ""
	I1119 22:33:31.930128  229026 logs.go:282] 0 containers: []
	W1119 22:33:31.930137  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:31.930144  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:31.930183  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:31.954253  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:31.954272  229026 cri.go:89] found id: ""
	I1119 22:33:31.954291  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:31.954347  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:31.957894  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:31.957950  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:31.982148  229026 cri.go:89] found id: ""
	I1119 22:33:31.982171  229026 logs.go:282] 0 containers: []
	W1119 22:33:31.982181  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:31.982187  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:31.982232  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:32.010776  229026 cri.go:89] found id: ""
	I1119 22:33:32.010801  229026 logs.go:282] 0 containers: []
	W1119 22:33:32.010809  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:32.010835  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:32.010850  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:32.036144  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:32.036167  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:32.078660  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:32.078684  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:32.106831  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:32.106857  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:32.189849  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:32.189874  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:32.203302  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:32.203326  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:32.257080  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:32.257098  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:32.257112  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:32.289358  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:32.289436  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
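	This gathering cycle repeats throughout the log: for each expected control-plane component, list matching containers with crictl ps, then tail the last 400 lines of each container's logs. A minimal sketch of the same loop, assuming crictl is available locally (minikube drives these commands over SSH via ssh_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors: sudo crictl ps -a --quiet --name=<name>
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, name := range components {
			ids, err := containerIDs(name)
			if err != nil || len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", name)
				continue
			}
			for _, id := range ids {
				// Mirrors: sudo crictl logs --tail 400 <id>
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("--- %s (%s) ---\n%s\n", name, id, logs)
			}
		}
	}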
	I1119 22:33:34.836503  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:34.836865  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:34.836919  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:34.836974  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:34.864697  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:34.864716  229026 cri.go:89] found id: ""
	I1119 22:33:34.864726  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:34.864788  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:34.868370  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:34.868423  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:34.894465  229026 cri.go:89] found id: ""
	I1119 22:33:34.894487  229026 logs.go:282] 0 containers: []
	W1119 22:33:34.894498  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:34.894505  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:34.894555  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:34.922777  229026 cri.go:89] found id: ""
	I1119 22:33:34.922798  229026 logs.go:282] 0 containers: []
	W1119 22:33:34.922810  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:34.922835  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:34.922886  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:34.949441  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:34.949462  229026 cri.go:89] found id: ""
	I1119 22:33:34.949471  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:34.949515  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:34.952986  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:34.953034  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:34.978855  229026 cri.go:89] found id: ""
	I1119 22:33:34.978885  229026 logs.go:282] 0 containers: []
	W1119 22:33:34.978896  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:34.978905  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:34.978956  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:35.004626  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:35.004650  229026 cri.go:89] found id: ""
	I1119 22:33:35.004658  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:35.004709  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:35.008905  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:35.008961  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:35.039110  229026 cri.go:89] found id: ""
	I1119 22:33:35.039132  229026 logs.go:282] 0 containers: []
	W1119 22:33:35.039141  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:35.039149  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:35.039202  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:35.065661  229026 cri.go:89] found id: ""
	I1119 22:33:35.065694  229026 logs.go:282] 0 containers: []
	W1119 22:33:35.065705  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:35.065719  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:35.065741  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:35.095020  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:35.095050  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:35.143773  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:35.143802  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:35.174044  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:35.174078  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:35.265375  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:35.265400  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:35.280716  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:35.280744  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:35.339887  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:35.339905  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:35.339919  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:35.375008  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:35.375028  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:35.308950  257842 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:33:35.327275  257842 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 22:33:35.331352  257842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
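	The two commands above implement an idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal entry, the new mapping is appended, and the result is copied back with sudo. A minimal native sketch of the same idea (assuming it runs as root, so the temp-file/sudo cp dance is unnecessary):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostEntry mirrors the bash one-liner above: drop any existing line
	// ending in "\t<host>", then append "<ip>\t<host>".
	func ensureHostEntry(ip, host string) error {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := ensureHostEntry("192.168.76.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}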
	I1119 22:33:35.342840  257842 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:33:35.343008  257842 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:35.343065  257842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:33:35.374136  257842 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:33:35.374157  257842 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:33:35.374203  257842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:33:35.399179  257842 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:33:35.399198  257842 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:33:35.399205  257842 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1119 22:33:35.399280  257842 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-409987 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:33:35.399339  257842 ssh_runner.go:195] Run: crio config
	I1119 22:33:35.444494  257842 cni.go:84] Creating CNI manager for ""
	I1119 22:33:35.444513  257842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:33:35.444528  257842 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:33:35.444547  257842 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-409987 NodeName:default-k8s-diff-port-409987 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:33:35.444673  257842 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-409987"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:33:35.444731  257842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:33:35.452420  257842 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:33:35.452477  257842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:33:35.459942  257842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 22:33:35.471786  257842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:33:35.486354  257842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1119 22:33:35.497770  257842 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:33:35.501361  257842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:33:35.510565  257842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:35.589911  257842 ssh_runner.go:195] Run: sudo systemctl start kubelet
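	The preceding steps write the kubelet systemd drop-in and unit (the 378- and 352-byte scp targets), then daemon-reload and start kubelet. A rough sketch of that installation step; the drop-in text below is approximated from the unit content logged earlier, and the exact file layout minikube writes is an assumption:

	package main

	import (
		"os"
		"os/exec"
	)

	// dropIn approximates the kubelet unit text shown in the log above.
	const dropIn = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-409987 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2

	[Install]
	`

	func main() {
		if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
			panic(err)
		}
		// Mirror the two commands in the log: daemon-reload, then start kubelet.
		for _, args := range [][]string{{"systemctl", "daemon-reload"}, {"systemctl", "start", "kubelet"}} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				panic(string(out))
			}
		}
	}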
	I1119 22:33:35.612829  257842 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987 for IP: 192.168.76.2
	I1119 22:33:35.612849  257842 certs.go:195] generating shared ca certs ...
	I1119 22:33:35.612868  257842 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:35.613005  257842 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 22:33:35.613069  257842 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 22:33:35.613084  257842 certs.go:257] generating profile certs ...
	I1119 22:33:35.613150  257842 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.key
	I1119 22:33:35.613176  257842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.crt with IP's: []
	I1119 22:33:36.259839  257842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.crt ...
	I1119 22:33:36.259864  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.crt: {Name:mk51645faa5989875e782e359a15271baba6c64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:36.260055  257842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.key ...
	I1119 22:33:36.260072  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.key: {Name:mkcbdf4025b10d73f6acb70bea0cad4aaaa9a2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:36.260192  257842 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832
	I1119 22:33:36.260218  257842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt.e1aaa832 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 22:33:36.935157  257842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt.e1aaa832 ...
	I1119 22:33:36.935185  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt.e1aaa832: {Name:mka229d41a2be07fe6a31ff8c42ef5ff6a82a36c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:36.935348  257842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832 ...
	I1119 22:33:36.935366  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832: {Name:mk46e2ff9da97b96045d25f2b413ce78625d779e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:36.935473  257842 certs.go:382] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt.e1aaa832 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt
	I1119 22:33:36.935578  257842 certs.go:386] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key
	I1119 22:33:36.935666  257842 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key
	I1119 22:33:36.935689  257842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt with IP's: []
	I1119 22:33:37.249125  257842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt ...
	I1119 22:33:37.249156  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt: {Name:mkea403caf60bc3ff91af8eead4c159ce9fb0ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:37.249328  257842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key ...
	I1119 22:33:37.249343  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key: {Name:mk241b53e3e9b76398e3ef0e5e4da30803b4e527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
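	The "generating signed profile cert" steps above create per-profile key pairs and sign them with the shared minikubeCA, embedding the IP SANs listed in the log (10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2). A rough standard-library sketch of that signing step, assuming the CA material is PKCS#1 PEM at ca.crt/ca.key; the subject name is illustrative, and the 26280h validity matches the CertExpiration in the cluster config:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// mustPEM reads a PEM file and returns the DER bytes of its first block.
	func mustPEM(path string) []byte {
		raw, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block in " + path)
		}
		return block.Bytes
	}

	func main() {
		caCert, err := x509.ParseCertificate(mustPEM("ca.crt"))
		if err != nil {
			panic(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca.key"))
		if err != nil {
			panic(err)
		}

		// Fresh key pair for the profile (apiserver) certificate.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}

		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs from the log line above.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
		}

		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
		os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
	}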
	I1119 22:33:37.249518  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem (1338 bytes)
	W1119 22:33:37.249551  257842 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829_empty.pem, impossibly tiny 0 bytes
	I1119 22:33:37.249561  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:33:37.249581  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:33:37.249602  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:33:37.249623  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 22:33:37.249663  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:33:37.250283  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:33:37.269810  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:33:37.288148  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:33:37.304313  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:33:37.320191  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:33:37.336074  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:33:37.352096  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:33:37.367943  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:33:37.383751  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:33:37.401518  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem --> /usr/share/ca-certificates/12829.pem (1338 bytes)
	I1119 22:33:37.417219  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /usr/share/ca-certificates/128292.pem (1708 bytes)
	I1119 22:33:37.433104  257842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:33:37.444411  257842 ssh_runner.go:195] Run: openssl version
	I1119 22:33:37.449967  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:33:37.457635  257842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:37.460958  257842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:37.461009  257842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:37.495263  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:33:37.503611  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12829.pem && ln -fs /usr/share/ca-certificates/12829.pem /etc/ssl/certs/12829.pem"
	I1119 22:33:37.511484  257842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12829.pem
	I1119 22:33:37.515181  257842 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12829.pem
	I1119 22:33:37.515229  257842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12829.pem
	I1119 22:33:37.549064  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12829.pem /etc/ssl/certs/51391683.0"
	I1119 22:33:37.557207  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128292.pem && ln -fs /usr/share/ca-certificates/128292.pem /etc/ssl/certs/128292.pem"
	I1119 22:33:37.565171  257842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128292.pem
	I1119 22:33:37.568456  257842 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128292.pem
	I1119 22:33:37.568497  257842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128292.pem
	I1119 22:33:37.602554  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128292.pem /etc/ssl/certs/3ec20f2e.0"
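	The openssl x509 -hash calls above compute the subject hash used for the /etc/ssl/certs/<hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-style clients on the node are made to trust the minikube CA and the extra certs. A small sketch of the same wiring, assuming root privileges and openssl on PATH:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"

		// Mirrors: openssl x509 -hash -noout -in <pem>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log

		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		_ = os.Remove(link) // replace any stale link, like ln -fs
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
	}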
	I1119 22:33:37.610380  257842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:33:37.613736  257842 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:33:37.613789  257842 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:33:37.613885  257842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:33:37.613954  257842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:33:37.640740  257842 cri.go:89] found id: ""
	I1119 22:33:37.640811  257842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:33:37.648414  257842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:33:37.655868  257842 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:33:37.655906  257842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:33:37.662990  257842 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:33:37.663005  257842 kubeadm.go:158] found existing configuration files:
	
	I1119 22:33:37.663036  257842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 22:33:37.670124  257842 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:33:37.670173  257842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:33:37.677380  257842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 22:33:37.686619  257842 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:33:37.686669  257842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:33:37.693439  257842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 22:33:37.700485  257842 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:33:37.700520  257842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:33:37.707378  257842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 22:33:37.714951  257842 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:33:37.714984  257842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:33:37.721780  257842 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:33:37.759244  257842 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:33:37.759294  257842 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:33:37.786995  257842 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:33:37.787082  257842 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:33:37.787129  257842 kubeadm.go:319] OS: Linux
	I1119 22:33:37.787187  257842 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:33:37.787260  257842 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:33:37.787357  257842 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:33:37.787443  257842 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:33:37.787529  257842 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:33:37.787609  257842 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:33:37.787686  257842 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:33:37.787779  257842 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:33:37.851453  257842 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:33:37.851600  257842 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:33:37.851724  257842 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:33:37.860973  257842 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:33:37.862911  257842 out.go:252]   - Generating certificates and keys ...
	I1119 22:33:37.863031  257842 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:33:37.863132  257842 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:33:37.987676  257842 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:33:38.117107  257842 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:33:38.304291  257842 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:33:38.419481  257842 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:33:38.673629  257842 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:33:38.673787  257842 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-409987 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:33:38.716286  257842 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:33:38.716448  257842 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-409987 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:33:38.841539  257842 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:33:37.253123  252325 node_ready.go:49] node "embed-certs-443380" is "Ready"
	I1119 22:33:37.253146  252325 node_ready.go:38] duration metric: took 11.503113839s for node "embed-certs-443380" to be "Ready" ...
	I1119 22:33:37.253158  252325 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:33:37.253193  252325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:33:37.264416  252325 api_server.go:72] duration metric: took 11.841983624s to wait for apiserver process to appear ...
	I1119 22:33:37.264435  252325 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:33:37.264448  252325 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:33:37.269949  252325 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 22:33:37.270720  252325 api_server.go:141] control plane version: v1.34.1
	I1119 22:33:37.270741  252325 api_server.go:131] duration metric: took 6.29992ms to wait for apiserver health ...
	I1119 22:33:37.270748  252325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:33:37.273648  252325 system_pods.go:59] 8 kube-system pods found
	I1119 22:33:37.273681  252325 system_pods.go:61] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:33:37.273687  252325 system_pods.go:61] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:37.273692  252325 system_pods.go:61] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:37.273695  252325 system_pods.go:61] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:37.273699  252325 system_pods.go:61] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:37.273702  252325 system_pods.go:61] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:37.273705  252325 system_pods.go:61] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:37.273710  252325 system_pods.go:61] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:33:37.273719  252325 system_pods.go:74] duration metric: took 2.966347ms to wait for pod list to return data ...
	I1119 22:33:37.273726  252325 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:33:37.275697  252325 default_sa.go:45] found service account: "default"
	I1119 22:33:37.275714  252325 default_sa.go:55] duration metric: took 1.983922ms for default service account to be created ...
	I1119 22:33:37.275722  252325 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:33:37.278323  252325 system_pods.go:86] 8 kube-system pods found
	I1119 22:33:37.278347  252325 system_pods.go:89] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:33:37.278357  252325 system_pods.go:89] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:37.278362  252325 system_pods.go:89] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:37.278366  252325 system_pods.go:89] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:37.278370  252325 system_pods.go:89] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:37.278373  252325 system_pods.go:89] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:37.278376  252325 system_pods.go:89] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:37.278380  252325 system_pods.go:89] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:33:37.278397  252325 retry.go:31] will retry after 216.008228ms: missing components: kube-dns
	I1119 22:33:37.498308  252325 system_pods.go:86] 8 kube-system pods found
	I1119 22:33:37.498341  252325 system_pods.go:89] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:33:37.498349  252325 system_pods.go:89] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:37.498359  252325 system_pods.go:89] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:37.498366  252325 system_pods.go:89] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:37.498373  252325 system_pods.go:89] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:37.498379  252325 system_pods.go:89] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:37.498384  252325 system_pods.go:89] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:37.498396  252325 system_pods.go:89] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:33:37.498412  252325 retry.go:31] will retry after 271.433631ms: missing components: kube-dns
	I1119 22:33:37.773981  252325 system_pods.go:86] 8 kube-system pods found
	I1119 22:33:37.774011  252325 system_pods.go:89] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:33:37.774024  252325 system_pods.go:89] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:37.774029  252325 system_pods.go:89] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:37.774033  252325 system_pods.go:89] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:37.774037  252325 system_pods.go:89] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:37.774040  252325 system_pods.go:89] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:37.774043  252325 system_pods.go:89] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:37.774048  252325 system_pods.go:89] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:33:37.774061  252325 retry.go:31] will retry after 422.422645ms: missing components: kube-dns
	I1119 22:33:38.201323  252325 system_pods.go:86] 8 kube-system pods found
	I1119 22:33:38.201351  252325 system_pods.go:89] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Running
	I1119 22:33:38.201358  252325 system_pods.go:89] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:38.201364  252325 system_pods.go:89] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:38.201370  252325 system_pods.go:89] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:38.201377  252325 system_pods.go:89] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:38.201382  252325 system_pods.go:89] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:38.201387  252325 system_pods.go:89] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:38.201392  252325 system_pods.go:89] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Running
	I1119 22:33:38.201410  252325 system_pods.go:126] duration metric: took 925.672892ms to wait for k8s-apps to be running ...
	I1119 22:33:38.201420  252325 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:33:38.201470  252325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:33:38.215425  252325 system_svc.go:56] duration metric: took 13.999039ms WaitForService to wait for kubelet
	I1119 22:33:38.215452  252325 kubeadm.go:587] duration metric: took 12.793019797s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:33:38.215473  252325 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:33:38.218207  252325 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:33:38.218230  252325 node_conditions.go:123] node cpu capacity is 8
	I1119 22:33:38.218241  252325 node_conditions.go:105] duration metric: took 2.763018ms to run NodePressure ...
	I1119 22:33:38.218255  252325 start.go:242] waiting for startup goroutines ...
	I1119 22:33:38.218268  252325 start.go:247] waiting for cluster config update ...
	I1119 22:33:38.218285  252325 start.go:256] writing updated cluster config ...
	I1119 22:33:38.218604  252325 ssh_runner.go:195] Run: rm -f paused
	I1119 22:33:38.222676  252325 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:33:38.226257  252325 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jmjmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.230497  252325 pod_ready.go:94] pod "coredns-66bc5c9577-jmjmf" is "Ready"
	I1119 22:33:38.230536  252325 pod_ready.go:86] duration metric: took 4.244524ms for pod "coredns-66bc5c9577-jmjmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.232661  252325 pod_ready.go:83] waiting for pod "etcd-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.236397  252325 pod_ready.go:94] pod "etcd-embed-certs-443380" is "Ready"
	I1119 22:33:38.236416  252325 pod_ready.go:86] duration metric: took 3.737265ms for pod "etcd-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.238310  252325 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.242981  252325 pod_ready.go:94] pod "kube-apiserver-embed-certs-443380" is "Ready"
	I1119 22:33:38.242999  252325 pod_ready.go:86] duration metric: took 4.670826ms for pod "kube-apiserver-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.244923  252325 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.627463  252325 pod_ready.go:94] pod "kube-controller-manager-embed-certs-443380" is "Ready"
	I1119 22:33:38.627488  252325 pod_ready.go:86] duration metric: took 382.549793ms for pod "kube-controller-manager-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.827739  252325 pod_ready.go:83] waiting for pod "kube-proxy-r5xtg" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:39.226142  252325 pod_ready.go:94] pod "kube-proxy-r5xtg" is "Ready"
	I1119 22:33:39.226169  252325 pod_ready.go:86] duration metric: took 398.408001ms for pod "kube-proxy-r5xtg" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:39.427580  252325 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:39.827781  252325 pod_ready.go:94] pod "kube-scheduler-embed-certs-443380" is "Ready"
	I1119 22:33:39.827836  252325 pod_ready.go:86] duration metric: took 400.201717ms for pod "kube-scheduler-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:39.827853  252325 pod_ready.go:40] duration metric: took 1.605146507s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:33:39.871483  252325 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:33:39.873526  252325 out.go:179] * Done! kubectl is now configured to use "embed-certs-443380" cluster and "default" namespace by default
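	Both clusters in this log gate on the same apiserver health probe: GET /healthz over HTTPS, logging "stopped" and retrying on connection refused until a 200 comes back (as it does for embed-certs-443380 above, while process 229026 keeps failing against 192.168.94.2:8443). A minimal sketch of that probe loop; the URL and timeouts are illustrative, and TLS verification is skipped here for brevity:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		url := "https://192.168.85.2:8443/healthz"
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Skip verification in this sketch; the apiserver cert is signed by
			// minikube's own CA rather than a public one.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("stopped:", err) // e.g. connect: connection refused
			} else {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println(url, "returned 200: ok")
					return
				}
				fmt.Println(url, "returned", code)
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for apiserver healthz")
	}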
	I1119 22:33:39.323985  257842 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:33:39.442549  257842 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:33:39.442737  257842 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:33:39.627688  257842 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:33:40.036493  257842 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:33:40.698146  257842 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:33:40.961731  257842 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:33:41.149359  257842 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:33:41.150288  257842 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:33:41.154317  257842 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:33:37.929115  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:37.929464  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:37.929520  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:37.929565  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:37.955356  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:37.955379  229026 cri.go:89] found id: ""
	I1119 22:33:37.955388  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:37.955438  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:37.959319  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:37.959393  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:37.984434  229026 cri.go:89] found id: ""
	I1119 22:33:37.984458  229026 logs.go:282] 0 containers: []
	W1119 22:33:37.984468  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:37.984475  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:37.984526  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:38.012164  229026 cri.go:89] found id: ""
	I1119 22:33:38.012190  229026 logs.go:282] 0 containers: []
	W1119 22:33:38.012199  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:38.012204  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:38.012285  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:38.036173  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:38.036195  229026 cri.go:89] found id: ""
	I1119 22:33:38.036205  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:38.036257  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:38.039850  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:38.039898  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:38.064432  229026 cri.go:89] found id: ""
	I1119 22:33:38.064452  229026 logs.go:282] 0 containers: []
	W1119 22:33:38.064461  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:38.064467  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:38.064514  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:38.090526  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:38.090548  229026 cri.go:89] found id: ""
	I1119 22:33:38.090557  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:38.090607  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:38.094245  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:38.094302  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:38.121462  229026 cri.go:89] found id: ""
	I1119 22:33:38.121481  229026 logs.go:282] 0 containers: []
	W1119 22:33:38.121491  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:38.121498  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:38.121549  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:38.146752  229026 cri.go:89] found id: ""
	I1119 22:33:38.146772  229026 logs.go:282] 0 containers: []
	W1119 22:33:38.146778  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:38.146787  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:38.146796  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:38.196010  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:38.196033  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:38.223390  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:38.223411  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:38.270213  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:38.270241  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:38.299662  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:38.299691  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:38.386912  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:38.386944  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:38.400305  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:38.400339  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:38.455714  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:38.455731  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:38.455743  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:40.987565  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:40.987943  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:40.987996  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:40.988049  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:41.016569  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:41.016586  229026 cri.go:89] found id: ""
	I1119 22:33:41.016593  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:41.016633  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:41.020316  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:41.020366  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:41.046437  229026 cri.go:89] found id: ""
	I1119 22:33:41.046457  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.046463  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:41.046468  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:41.046529  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:41.072677  229026 cri.go:89] found id: ""
	I1119 22:33:41.072701  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.072711  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:41.072719  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:41.072769  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:41.099927  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:41.099949  229026 cri.go:89] found id: ""
	I1119 22:33:41.099959  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:41.100014  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:41.104773  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:41.104852  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:41.139008  229026 cri.go:89] found id: ""
	I1119 22:33:41.139034  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.139043  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:41.139051  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:41.139109  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:41.170661  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:41.170688  229026 cri.go:89] found id: ""
	I1119 22:33:41.170706  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:41.170763  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:41.174802  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:41.174872  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:41.209289  229026 cri.go:89] found id: ""
	I1119 22:33:41.209313  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.209323  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:41.209330  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:41.209383  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:41.248091  229026 cri.go:89] found id: ""
	I1119 22:33:41.248112  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.248119  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:41.248128  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:41.248139  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:41.341775  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:41.341806  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:41.355629  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:41.355651  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:41.412102  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:41.412120  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:41.412132  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:41.440857  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:41.440882  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:41.488518  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:41.488550  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:41.514120  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:41.514142  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
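	
	The probes above from process 229026 hit the apiserver's /healthz endpoint, treat a refused TCP connection as the server being stopped, and then fall back to collecting container logs over SSH. A rough Go sketch of that probe pattern follows; it assumes an untrusted self-signed serving certificate (so TLS verification is skipped) and a fixed retry count, and is not minikube's api_server.go.
	
	// healthz_probe_sketch.go - illustrative only.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// probe returns true when GET /healthz answers 200, and false when the
	// endpoint is unreachable (e.g. "connection refused" while the
	// kube-apiserver container is down, as in the log above).
	func probe(url string) bool {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Skipping verification is an assumption of this sketch.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			return false
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK
	}
	
	func main() {
		url := "https://192.168.94.2:8443/healthz" // address taken from the log above
		for i := 0; i < 5; i++ {                   // assumed retry count
			if probe(url) {
				fmt.Println("apiserver healthy")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("apiserver did not become healthy")
	}
	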
	I1119 22:33:41.156021  257842 out.go:252]   - Booting up control plane ...
	I1119 22:33:41.156147  257842 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:33:41.156248  257842 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:33:41.157952  257842 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:33:41.175709  257842 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:33:41.175884  257842 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:33:41.184676  257842 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:33:41.185076  257842 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:33:41.185168  257842 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:33:41.291554  257842 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:33:41.291689  257842 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:33:41.793221  257842 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.701541ms
	I1119 22:33:41.796084  257842 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:33:41.796211  257842 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1119 22:33:41.796352  257842 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:33:41.796490  257842 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:33:43.308442  257842 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.512357875s
	
	
	==> CRI-O <==
	Nov 19 22:33:06 no-preload-178067 crio[580]: time="2025-11-19T22:33:06.118060029Z" level=info msg="Started container" PID=1790 containerID=323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b/dashboard-metrics-scraper id=3b85e31c-0462-4d80-99c3-99a9d3c6e25a name=/runtime.v1.RuntimeService/StartContainer sandboxID=90007c75ed69c8a90e6e3581234ee52a9e54f5b1fc947d2e0799377a0886fdd6
	Nov 19 22:33:06 no-preload-178067 crio[580]: time="2025-11-19T22:33:06.632025427Z" level=info msg="Removing container: 880c7b57f2efdc9c6ffe3a69810af6eff3881d8549591192e62888d2f8df29bc" id=60002f2a-7f76-4c8b-8805-18197b45e34c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:33:06 no-preload-178067 crio[580]: time="2025-11-19T22:33:06.654513598Z" level=info msg="Removed container 880c7b57f2efdc9c6ffe3a69810af6eff3881d8549591192e62888d2f8df29bc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b/dashboard-metrics-scraper" id=60002f2a-7f76-4c8b-8805-18197b45e34c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.666097372Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a8aa22cc-3615-4904-9409-93a69fdfdf2f name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.667519365Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a850c217-d407-4ed2-aa91-797cd2cdfb25 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.668702679Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2721d545-82bc-4b4d-8bac-989543fb4ad6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.668870612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.67433721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.674521293Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e87b2122d6da93f4dac3cee3d32b7054b97ad3b896902e000553ed502d8280db/merged/etc/passwd: no such file or directory"
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.674549455Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e87b2122d6da93f4dac3cee3d32b7054b97ad3b896902e000553ed502d8280db/merged/etc/group: no such file or directory"
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.674788354Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.701026321Z" level=info msg="Created container ca69bb794dfbdb19cc9d54b5b28bbc9d94279dffeb4a9e6a23344c42401ad048: kube-system/storage-provisioner/storage-provisioner" id=2721d545-82bc-4b4d-8bac-989543fb4ad6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.702523333Z" level=info msg="Starting container: ca69bb794dfbdb19cc9d54b5b28bbc9d94279dffeb4a9e6a23344c42401ad048" id=3df70c4d-e68b-4819-a4cf-0bf924d0d7e2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.704878351Z" level=info msg="Started container" PID=1804 containerID=ca69bb794dfbdb19cc9d54b5b28bbc9d94279dffeb4a9e6a23344c42401ad048 description=kube-system/storage-provisioner/storage-provisioner id=3df70c4d-e68b-4819-a4cf-0bf924d0d7e2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5dace2361c932931e9efb907fe7f6efa98b7d0bda47515b5fddb3f88c5ba5e5a
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.555212574Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dd310b70-b261-46dd-bfa8-4ff7bf6e57e2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.578393634Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0779120a-e5ba-4bf7-966e-616a309a8a88 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.579378994Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b/dashboard-metrics-scraper" id=fdecc645-45c7-4b97-b557-662ce7d4ad15 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.579501358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.668696656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.669157066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.822302944Z" level=info msg="Created container 1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b/dashboard-metrics-scraper" id=fdecc645-45c7-4b97-b557-662ce7d4ad15 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.823047809Z" level=info msg="Starting container: 1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed" id=639b5ca2-675e-4436-a170-6e55164ca5b4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.825336636Z" level=info msg="Started container" PID=1819 containerID=1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b/dashboard-metrics-scraper id=639b5ca2-675e-4436-a170-6e55164ca5b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90007c75ed69c8a90e6e3581234ee52a9e54f5b1fc947d2e0799377a0886fdd6
	Nov 19 22:33:29 no-preload-178067 crio[580]: time="2025-11-19T22:33:29.693120161Z" level=info msg="Removing container: 323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1" id=2b68eeca-aa7f-4a99-801b-084fbaf49db9 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:33:29 no-preload-178067 crio[580]: time="2025-11-19T22:33:29.703030853Z" level=info msg="Removed container 323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b/dashboard-metrics-scraper" id=2b68eeca-aa7f-4a99-801b-084fbaf49db9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1a13a9b1ed1ea       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   3                   90007c75ed69c       dashboard-metrics-scraper-6ffb444bf9-s7v5b   kubernetes-dashboard
	ca69bb794dfbd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   5dace2361c932       storage-provisioner                          kube-system
	1b322960c77f5       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   83a06a191146d       kubernetes-dashboard-855c9754f9-c59j5        kubernetes-dashboard
	5d9a3926452fe       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   ba7fe49028090       coredns-66bc5c9577-9dwxr                     kube-system
	7b9d1041c9ea2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   77392b0f52c77       busybox                                      default
	63b4d5c69223f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   5dace2361c932       storage-provisioner                          kube-system
	c4eb1fb19b099       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   c3c61b0a43b65       kindnet-4rclw                                kube-system
	86197cbc9c40e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   2490870fe085f       kube-proxy-xll6z                             kube-system
	2b0da6046bd3a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   0a7e86364e094       kube-apiserver-no-preload-178067             kube-system
	4b15ce24a3aaf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   5ce4a70de4902       etcd-no-preload-178067                       kube-system
	8cdd1b2386fc9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   9bc3df812c114       kube-scheduler-no-preload-178067             kube-system
	a8dcf65794e21       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   025ee82a1f333       kube-controller-manager-no-preload-178067    kube-system
	
	
	==> coredns [5d9a3926452fe1153e2f2a4f626a6a7edc0937440208143a2bbde7bf7330c415] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46928 - 20273 "HINFO IN 1005414316487781050.7694635695379333558. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.12614137s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-178067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-178067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=no-preload-178067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_31_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:31:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-178067
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:33:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:33:18 +0000   Wed, 19 Nov 2025 22:31:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:33:18 +0000   Wed, 19 Nov 2025 22:31:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:33:18 +0000   Wed, 19 Nov 2025 22:31:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:33:18 +0000   Wed, 19 Nov 2025 22:32:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-178067
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                4f7d1af3-d456-499c-ab45-67c0314eb59f
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-9dwxr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-178067                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-4rclw                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-178067              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-178067     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-xll6z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-178067              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s7v5b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c59j5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node no-preload-178067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node no-preload-178067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node no-preload-178067 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node no-preload-178067 event: Registered Node no-preload-178067 in Controller
	  Normal  NodeReady                96s                kubelet          Node no-preload-178067 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node no-preload-178067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node no-preload-178067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node no-preload-178067 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node no-preload-178067 event: Registered Node no-preload-178067 in Controller
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [4b15ce24a3aaf48f3b98e89cd8a66d0595225b1070cc8af2af5fbc40d5f34ef7] <==
	{"level":"warn","ts":"2025-11-19T22:32:47.427124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.434437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.441257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.447491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.453343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.459558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.474514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.480327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.487031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.535632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:06.246647Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.618262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b.18798931978aef76\" limit:1 ","response":"range_response_count:1 size:874"}
	{"level":"warn","ts":"2025-11-19T22:33:06.246712Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.64908ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-9dwxr\" limit:1 ","response":"range_response_count:1 size:5935"}
	{"level":"info","ts":"2025-11-19T22:33:06.246758Z","caller":"traceutil/trace.go:172","msg":"trace[1404333613] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-9dwxr; range_end:; response_count:1; response_revision:603; }","duration":"142.70481ms","start":"2025-11-19T22:33:06.104045Z","end":"2025-11-19T22:33:06.246750Z","steps":["trace[1404333613] 'range keys from in-memory index tree'  (duration: 142.470259ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:33:06.246729Z","caller":"traceutil/trace.go:172","msg":"trace[655126030] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b.18798931978aef76; range_end:; response_count:1; response_revision:603; }","duration":"129.713433ms","start":"2025-11-19T22:33:06.117003Z","end":"2025-11-19T22:33:06.246717Z","steps":["trace[655126030] 'range keys from in-memory index tree'  (duration: 129.498234ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:33:28.974973Z","caller":"traceutil/trace.go:172","msg":"trace[1420919189] linearizableReadLoop","detail":"{readStateIndex:663; appliedIndex:663; }","duration":"117.389074ms","start":"2025-11-19T22:33:28.857550Z","end":"2025-11-19T22:33:28.974940Z","steps":["trace[1420919189] 'read index received'  (duration: 117.378774ms)","trace[1420919189] 'applied index is now lower than readState.Index'  (duration: 8.72µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:33:29.121228Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"263.656519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T22:33:29.121303Z","caller":"traceutil/trace.go:172","msg":"trace[1351738011] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:628; }","duration":"263.740379ms","start":"2025-11-19T22:33:28.857546Z","end":"2025-11-19T22:33:29.121287Z","steps":["trace[1351738011] 'agreement among raft nodes before linearized reading'  (duration: 117.497302ms)","trace[1351738011] 'range keys from in-memory index tree'  (duration: 146.128651ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:33:29.121864Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.147717ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790131555747815 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b.1879893197dcd9cd\" mod_revision:605 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b.1879893197dcd9cd\" value_size:743 lease:4650418094700971391 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b.1879893197dcd9cd\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T22:33:29.121978Z","caller":"traceutil/trace.go:172","msg":"trace[1860012837] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"268.880791ms","start":"2025-11-19T22:33:28.853081Z","end":"2025-11-19T22:33:29.121961Z","steps":["trace[1860012837] 'process raft request'  (duration: 122.030557ms)","trace[1860012837] 'compare'  (duration: 146.068374ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:33:29.122055Z","caller":"traceutil/trace.go:172","msg":"trace[992149599] linearizableReadLoop","detail":"{readStateIndex:664; appliedIndex:663; }","duration":"146.994654ms","start":"2025-11-19T22:33:28.975042Z","end":"2025-11-19T22:33:29.122037Z","steps":["trace[992149599] 'read index received'  (duration: 24.979771ms)","trace[992149599] 'applied index is now lower than readState.Index'  (duration: 122.013178ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:33:29.122106Z","caller":"traceutil/trace.go:172","msg":"trace[269193656] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"262.431371ms","start":"2025-11-19T22:33:28.859667Z","end":"2025-11-19T22:33:29.122098Z","steps":["trace[269193656] 'process raft request'  (duration: 262.39437ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:33:29.122228Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"210.24163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" limit:1 ","response":"range_response_count:1 size:842"}
	{"level":"info","ts":"2025-11-19T22:33:29.122233Z","caller":"traceutil/trace.go:172","msg":"trace[216327572] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"264.016237ms","start":"2025-11-19T22:33:28.858205Z","end":"2025-11-19T22:33:29.122221Z","steps":["trace[216327572] 'process raft request'  (duration: 263.745664ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:33:29.122257Z","caller":"traceutil/trace.go:172","msg":"trace[1508567573] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper; range_end:; response_count:1; response_revision:632; }","duration":"210.277826ms","start":"2025-11-19T22:33:28.911971Z","end":"2025-11-19T22:33:29.122249Z","steps":["trace[1508567573] 'agreement among raft nodes before linearized reading'  (duration: 210.161953ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:33:29.122271Z","caller":"traceutil/trace.go:172","msg":"trace[1127934729] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"263.942181ms","start":"2025-11-19T22:33:28.858319Z","end":"2025-11-19T22:33:29.122261Z","steps":["trace[1127934729] 'process raft request'  (duration: 263.700777ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:33:45 up  1:16,  0 user,  load average: 2.27, 2.65, 1.80
	Linux no-preload-178067 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c4eb1fb19b099d7480679ca495008b509002cc63b9e988d15483d29f4cffa841] <==
	I1119 22:32:49.114712       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:32:49.142703       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 22:32:49.142862       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:32:49.142880       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:32:49.142899       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:32:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:32:49.313596       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:32:49.313620       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:32:49.313633       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:32:49.313757       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:32:49.613708       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:32:49.613730       1 metrics.go:72] Registering metrics
	I1119 22:32:49.613846       1 controller.go:711] "Syncing nftables rules"
	I1119 22:32:59.313959       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:32:59.314016       1 main.go:301] handling current node
	I1119 22:33:09.316927       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:33:09.316973       1 main.go:301] handling current node
	I1119 22:33:19.313446       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:33:19.313485       1 main.go:301] handling current node
	I1119 22:33:29.314024       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:33:29.314062       1 main.go:301] handling current node
	I1119 22:33:39.316415       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:33:39.316444       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2b0da6046bd3a9d1409a02171cd110e7f7c80d13375006ef7726a6948b964a45] <==
	I1119 22:32:47.992691       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1119 22:32:47.995797       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 22:32:47.996625       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:32:47.998604       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 22:32:48.004754       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 22:32:48.004825       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:32:48.005733       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 22:32:48.005942       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 22:32:48.006020       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1119 22:32:48.006104       1 aggregator.go:171] initial CRD sync complete...
	I1119 22:32:48.006129       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:32:48.006135       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:32:48.006145       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:32:48.026579       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:32:48.285767       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:32:48.311626       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:32:48.328570       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:32:48.334126       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:32:48.340694       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:32:48.373916       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.44.57"}
	I1119 22:32:48.383256       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.13.160"}
	I1119 22:32:48.901343       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:32:51.465335       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:32:51.864723       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:32:51.914070       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a8dcf65794e2178ac75421c7fa689f31104856b8f819faab188b47806609c062] <==
	I1119 22:32:51.312280       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:32:51.312366       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-178067"
	I1119 22:32:51.312422       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 22:32:51.313022       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 22:32:51.317385       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:32:51.321642       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:32:51.361210       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 22:32:51.361226       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:32:51.361247       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 22:32:51.361261       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:32:51.361208       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 22:32:51.361329       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:32:51.361338       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:32:51.361346       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:32:51.361643       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:32:51.361977       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:32:51.362193       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 22:32:51.362558       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:32:51.363666       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:32:51.365567       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 22:32:51.368853       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:32:51.378106       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:32:51.380355       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:32:51.382620       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:32:51.391896       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [86197cbc9c40eb4956802a892d3451ccc5f998c8c7d732efd889058c5af9dc86] <==
	I1119 22:32:48.958251       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:32:49.032760       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:32:49.133286       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:32:49.133313       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 22:32:49.133414       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:32:49.150866       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:32:49.150915       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:32:49.155938       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:32:49.156378       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:32:49.156406       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:32:49.157532       1 config.go:200] "Starting service config controller"
	I1119 22:32:49.157563       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:32:49.157564       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:32:49.157589       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:32:49.157616       1 config.go:309] "Starting node config controller"
	I1119 22:32:49.157625       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:32:49.157632       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:32:49.157690       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:32:49.157717       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:32:49.258714       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:32:49.258752       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:32:49.258716       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8cdd1b2386fc9d6e80ae7431ec6d46c12963b7da1447247ecf7b9cd33805a53e] <==
	I1119 22:32:46.574505       1 serving.go:386] Generated self-signed cert in-memory
	I1119 22:32:47.966415       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 22:32:47.966437       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:32:47.971857       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 22:32:47.971887       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 22:32:47.971923       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:32:47.971932       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:32:47.971947       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:32:47.971953       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:32:47.972104       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:32:47.972194       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 22:32:48.072542       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:32:48.072573       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 22:32:48.072608       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:32:55 no-preload-178067 kubelet[728]: I1119 22:32:55.598480     728 scope.go:117] "RemoveContainer" containerID="86f06e16e9f2d3272e29039cfc54d8e3badf0c15bc5b1d8d7ad65819a7ecd41b"
	Nov 19 22:32:56 no-preload-178067 kubelet[728]: I1119 22:32:56.603091     728 scope.go:117] "RemoveContainer" containerID="86f06e16e9f2d3272e29039cfc54d8e3badf0c15bc5b1d8d7ad65819a7ecd41b"
	Nov 19 22:32:56 no-preload-178067 kubelet[728]: I1119 22:32:56.603252     728 scope.go:117] "RemoveContainer" containerID="880c7b57f2efdc9c6ffe3a69810af6eff3881d8549591192e62888d2f8df29bc"
	Nov 19 22:32:56 no-preload-178067 kubelet[728]: E1119 22:32:56.603490     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s7v5b_kubernetes-dashboard(fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b" podUID="fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4"
	Nov 19 22:32:57 no-preload-178067 kubelet[728]: I1119 22:32:57.608269     728 scope.go:117] "RemoveContainer" containerID="880c7b57f2efdc9c6ffe3a69810af6eff3881d8549591192e62888d2f8df29bc"
	Nov 19 22:32:57 no-preload-178067 kubelet[728]: E1119 22:32:57.608441     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s7v5b_kubernetes-dashboard(fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b" podUID="fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4"
	Nov 19 22:32:58 no-preload-178067 kubelet[728]: I1119 22:32:58.736362     728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 22:32:59 no-preload-178067 kubelet[728]: I1119 22:32:59.623495     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c59j5" podStartSLOduration=2.05186628 podStartE2EDuration="8.623474646s" podCreationTimestamp="2025-11-19 22:32:51 +0000 UTC" firstStartedPulling="2025-11-19 22:32:52.16004209 +0000 UTC m=+6.712449093" lastFinishedPulling="2025-11-19 22:32:58.731650442 +0000 UTC m=+13.284057459" observedRunningTime="2025-11-19 22:32:59.623245751 +0000 UTC m=+14.175652776" watchObservedRunningTime="2025-11-19 22:32:59.623474646 +0000 UTC m=+14.175881669"
	Nov 19 22:33:05 no-preload-178067 kubelet[728]: I1119 22:33:05.879090     728 scope.go:117] "RemoveContainer" containerID="880c7b57f2efdc9c6ffe3a69810af6eff3881d8549591192e62888d2f8df29bc"
	Nov 19 22:33:06 no-preload-178067 kubelet[728]: I1119 22:33:06.630677     728 scope.go:117] "RemoveContainer" containerID="880c7b57f2efdc9c6ffe3a69810af6eff3881d8549591192e62888d2f8df29bc"
	Nov 19 22:33:06 no-preload-178067 kubelet[728]: I1119 22:33:06.630927     728 scope.go:117] "RemoveContainer" containerID="323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1"
	Nov 19 22:33:06 no-preload-178067 kubelet[728]: E1119 22:33:06.631135     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s7v5b_kubernetes-dashboard(fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b" podUID="fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4"
	Nov 19 22:33:15 no-preload-178067 kubelet[728]: I1119 22:33:15.878983     728 scope.go:117] "RemoveContainer" containerID="323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1"
	Nov 19 22:33:15 no-preload-178067 kubelet[728]: E1119 22:33:15.879162     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s7v5b_kubernetes-dashboard(fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b" podUID="fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4"
	Nov 19 22:33:19 no-preload-178067 kubelet[728]: I1119 22:33:19.665483     728 scope.go:117] "RemoveContainer" containerID="63b4d5c69223fdefa7ca853e7e38f705bdc5541b5c4cdcb98fb26b40f27b3d10"
	Nov 19 22:33:28 no-preload-178067 kubelet[728]: I1119 22:33:28.554712     728 scope.go:117] "RemoveContainer" containerID="323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1"
	Nov 19 22:33:29 no-preload-178067 kubelet[728]: I1119 22:33:29.691759     728 scope.go:117] "RemoveContainer" containerID="323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1"
	Nov 19 22:33:29 no-preload-178067 kubelet[728]: I1119 22:33:29.692034     728 scope.go:117] "RemoveContainer" containerID="1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed"
	Nov 19 22:33:29 no-preload-178067 kubelet[728]: E1119 22:33:29.692225     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s7v5b_kubernetes-dashboard(fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b" podUID="fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4"
	Nov 19 22:33:35 no-preload-178067 kubelet[728]: I1119 22:33:35.878795     728 scope.go:117] "RemoveContainer" containerID="1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed"
	Nov 19 22:33:35 no-preload-178067 kubelet[728]: E1119 22:33:35.878957     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s7v5b_kubernetes-dashboard(fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b" podUID="fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4"
	Nov 19 22:33:42 no-preload-178067 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:33:42 no-preload-178067 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:33:42 no-preload-178067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 22:33:42 no-preload-178067 systemd[1]: kubelet.service: Consumed 1.624s CPU time.
	
	
	==> kubernetes-dashboard [1b322960c77f50cdccffcfe8abe1d997e9c28f67a27b18ffb8d0b3ecb03a0409] <==
	2025/11/19 22:32:58 Using namespace: kubernetes-dashboard
	2025/11/19 22:32:58 Using in-cluster config to connect to apiserver
	2025/11/19 22:32:58 Using secret token for csrf signing
	2025/11/19 22:32:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 22:32:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 22:32:58 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 22:32:58 Generating JWE encryption key
	2025/11/19 22:32:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 22:32:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 22:32:58 Initializing JWE encryption key from synchronized object
	2025/11/19 22:32:58 Creating in-cluster Sidecar client
	2025/11/19 22:32:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:32:58 Serving insecurely on HTTP port: 9090
	2025/11/19 22:33:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:32:58 Starting overwatch
	
	
	==> storage-provisioner [63b4d5c69223fdefa7ca853e7e38f705bdc5541b5c4cdcb98fb26b40f27b3d10] <==
	I1119 22:32:48.921627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 22:33:18.925436       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ca69bb794dfbdb19cc9d54b5b28bbc9d94279dffeb4a9e6a23344c42401ad048] <==
	I1119 22:33:19.720181       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:33:19.728589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:33:19.728645       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:33:19.730982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:23.186003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:27.446605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:31.045289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:34.099273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:37.121473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:37.127742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:33:37.127944       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:33:37.128142       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-178067_3bcf78ed-8446-4d6c-b59a-90fe7ff8724f!
	I1119 22:33:37.128647       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"410535e3-f1a2-4daf-93d0-dd88f3003fa0", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-178067_3bcf78ed-8446-4d6c-b59a-90fe7ff8724f became leader
	W1119 22:33:37.132163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:37.135156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:33:37.228721       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-178067_3bcf78ed-8446-4d6c-b59a-90fe7ff8724f!
	W1119 22:33:39.138111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:39.142452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:41.146135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:41.150407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:43.154367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:43.159689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:45.163726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:45.168997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-178067 -n no-preload-178067
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-178067 -n no-preload-178067: exit status 2 (338.980036ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-178067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-178067
helpers_test.go:243: (dbg) docker inspect no-preload-178067:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37",
	        "Created": "2025-11-19T22:31:25.543221838Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 247397,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:32:38.644503734Z",
	            "FinishedAt": "2025-11-19T22:32:37.760634905Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37/hostname",
	        "HostsPath": "/var/lib/docker/containers/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37/hosts",
	        "LogPath": "/var/lib/docker/containers/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37/4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37-json.log",
	        "Name": "/no-preload-178067",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-178067:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-178067",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4349f03a96054cc89975edb795cb8a58f87fc87c9623c2a68bb8557def392a37",
	                "LowerDir": "/var/lib/docker/overlay2/178cd503a6e39d7ee32119169c14a96aa506d8f743cbd3d41514328f24c0a6f7-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/178cd503a6e39d7ee32119169c14a96aa506d8f743cbd3d41514328f24c0a6f7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/178cd503a6e39d7ee32119169c14a96aa506d8f743cbd3d41514328f24c0a6f7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/178cd503a6e39d7ee32119169c14a96aa506d8f743cbd3d41514328f24c0a6f7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-178067",
	                "Source": "/var/lib/docker/volumes/no-preload-178067/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-178067",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-178067",
	                "name.minikube.sigs.k8s.io": "no-preload-178067",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "3902cc69095f95a615fc7ef19c18587d730c38025b6ec3a50aa50e0aae990dd7",
	            "SandboxKey": "/var/run/docker/netns/3902cc69095f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-178067": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d2927e8174830464514428039b44b26b0e43356a4a3627c8d30f3646150dbf7f",
	                    "EndpointID": "80b5feed6a2485b52e7ce1305786570683a37f298dc094eb6424434caf03315b",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "aa:92:ea:15:6a:50",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-178067",
	                        "4349f03a9605"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-178067 -n no-preload-178067
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-178067 -n no-preload-178067: exit status 2 (334.278855ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-178067 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-178067 logs -n 25: (1.156665132s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-801704    │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ ssh     │ -p NoKubernetes-662839 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-662839          │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │                     │
	│ delete  │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839          │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ delete  │ -p missing-upgrade-015670                                                                                                                                                                                                                     │ missing-upgrade-015670       │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-680619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ stop    │ -p old-k8s-version-680619 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680619 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-178067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ stop    │ -p no-preload-178067 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable dashboard -p no-preload-178067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p cert-expiration-855818 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-855818       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ delete  │ -p cert-expiration-855818                                                                                                                                                                                                                     │ cert-expiration-855818       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ image   │ old-k8s-version-680619 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p old-k8s-version-680619 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p disable-driver-mounts-726490                                                                                                                                                                                                               │ disable-driver-mounts-726490 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ image   │ no-preload-178067 image list --format=json                                                                                                                                                                                                    │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p no-preload-178067 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:33:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:33:23.883705  257842 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:33:23.883983  257842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:23.883993  257842 out.go:374] Setting ErrFile to fd 2...
	I1119 22:33:23.883997  257842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:23.884187  257842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:33:23.884673  257842 out.go:368] Setting JSON to false
	I1119 22:33:23.885756  257842 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4552,"bootTime":1763587052,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:33:23.885849  257842 start.go:143] virtualization: kvm guest
	I1119 22:33:23.887726  257842 out.go:179] * [default-k8s-diff-port-409987] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:33:23.889070  257842 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:33:23.889070  257842 notify.go:221] Checking for updates...
	I1119 22:33:23.891485  257842 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:33:23.892734  257842 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:33:23.893909  257842 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:33:23.895062  257842 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:33:23.896153  257842 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:33:23.897750  257842 config.go:182] Loaded profile config "embed-certs-443380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:23.897897  257842 config.go:182] Loaded profile config "kubernetes-upgrade-801704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:23.898024  257842 config.go:182] Loaded profile config "no-preload-178067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:23.898147  257842 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:33:23.925695  257842 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:33:23.925842  257842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:33:23.983931  257842 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:33:23.974160621 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:33:23.984034  257842 docker.go:319] overlay module found
	I1119 22:33:23.985686  257842 out.go:179] * Using the docker driver based on user configuration
	I1119 22:33:23.986806  257842 start.go:309] selected driver: docker
	I1119 22:33:23.986842  257842 start.go:930] validating driver "docker" against <nil>
	I1119 22:33:23.986855  257842 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:33:23.987349  257842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:33:24.044957  257842 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:33:24.035470502 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:33:24.045358  257842 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:33:24.045644  257842 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:33:24.047182  257842 out.go:179] * Using Docker driver with root privileges
	I1119 22:33:24.048300  257842 cni.go:84] Creating CNI manager for ""
	I1119 22:33:24.048398  257842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:33:24.048413  257842 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:33:24.048479  257842 start.go:353] cluster config:
	{Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:33:24.049668  257842 out.go:179] * Starting "default-k8s-diff-port-409987" primary control-plane node in "default-k8s-diff-port-409987" cluster
	I1119 22:33:24.050617  257842 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:33:24.051685  257842 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:33:24.052672  257842 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:24.052710  257842 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:33:24.052717  257842 cache.go:65] Caching tarball of preloaded images
	I1119 22:33:24.052766  257842 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:33:24.052856  257842 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:33:24.052873  257842 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:33:24.052980  257842 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json ...
	I1119 22:33:24.053013  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json: {Name:mkd16b9878826f2245b2c07a772bd12235442172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:24.072676  257842 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:33:24.072691  257842 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:33:24.072705  257842 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:33:24.072727  257842 start.go:360] acquireMachinesLock for default-k8s-diff-port-409987: {Name:mk3691865877e78ad0fe52d2c0e71ee1c1c3699a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:33:24.072831  257842 start.go:364] duration metric: took 71.579µs to acquireMachinesLock for "default-k8s-diff-port-409987"
	I1119 22:33:24.072860  257842 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:33:24.072935  257842 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:33:21.846845  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:22.347017  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:22.847034  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:23.346898  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:23.846436  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:24.346943  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:24.846671  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:25.346975  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:25.420030  252325 kubeadm.go:1114] duration metric: took 4.651844422s to wait for elevateKubeSystemPrivileges
	I1119 22:33:25.420066  252325 kubeadm.go:403] duration metric: took 14.384664171s to StartCluster
	I1119 22:33:25.420088  252325 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:25.420154  252325 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:33:25.422122  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:25.422376  252325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:33:25.422394  252325 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:33:25.422458  252325 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:33:25.422555  252325 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-443380"
	I1119 22:33:25.422587  252325 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-443380"
	I1119 22:33:25.422585  252325 addons.go:70] Setting default-storageclass=true in profile "embed-certs-443380"
	I1119 22:33:25.422605  252325 config.go:182] Loaded profile config "embed-certs-443380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:25.422616  252325 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-443380"
	I1119 22:33:25.422620  252325 host.go:66] Checking if "embed-certs-443380" exists ...
	I1119 22:33:25.423009  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:25.423154  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:25.425572  252325 out.go:179] * Verifying Kubernetes components...
	I1119 22:33:25.427178  252325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:25.446337  252325 addons.go:239] Setting addon default-storageclass=true in "embed-certs-443380"
	I1119 22:33:25.446384  252325 host.go:66] Checking if "embed-certs-443380" exists ...
	I1119 22:33:25.446890  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:25.448940  252325 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:33:25.450228  252325 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:33:25.450251  252325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:33:25.450306  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:25.480574  252325 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:33:25.480600  252325 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:33:25.480661  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:25.481387  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:25.506078  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:25.523359  252325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:33:25.586976  252325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:33:25.611710  252325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:33:25.635667  252325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:33:25.747803  252325 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 22:33:25.750001  252325 node_ready.go:35] waiting up to 6m0s for node "embed-certs-443380" to be "Ready" ...
	I1119 22:33:25.969838  252325 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:33:25.970910  252325 addons.go:515] duration metric: took 548.451841ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:33:26.253634  252325 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-443380" context rescaled to 1 replicas
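
The burst of "kubectl get sa default" commands above is minikube polling until the default ServiceAccount exists before it finishes cluster startup and enables the storage addons. A minimal Go sketch of that kind of retry loop follows; the binary and kubeconfig paths and the roughly 500ms cadence are taken from the log, while the function itself is a hypothetical illustration rather than minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries "kubectl get sa default" until it succeeds or the
// deadline passes, mirroring the ~500ms polling cadence visible in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default ServiceAccount exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
}

func main() {
	// Paths match the ones shown in the log; adjust for a different cluster.
	err := waitForDefaultSA(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("wait result:", err)
}
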
	I1119 22:33:22.382769  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:22.383154  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:22.383202  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:22.383251  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:22.412635  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:22.412654  229026 cri.go:89] found id: ""
	I1119 22:33:22.412662  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:22.412702  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:22.416473  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:22.416531  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:22.442074  229026 cri.go:89] found id: ""
	I1119 22:33:22.442093  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.442100  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:22.442105  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:22.442152  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:22.467611  229026 cri.go:89] found id: ""
	I1119 22:33:22.467633  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.467641  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:22.467648  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:22.467703  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:22.494154  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:22.494172  229026 cri.go:89] found id: ""
	I1119 22:33:22.494180  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:22.494229  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:22.497892  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:22.497950  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:22.523686  229026 cri.go:89] found id: ""
	I1119 22:33:22.523711  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.523720  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:22.523729  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:22.523785  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:22.549770  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:22.549794  229026 cri.go:89] found id: "c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:22.549799  229026 cri.go:89] found id: ""
	I1119 22:33:22.549810  229026 logs.go:282] 2 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a]
	I1119 22:33:22.549889  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:22.554433  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:22.558149  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:22.558194  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:22.594272  229026 cri.go:89] found id: ""
	I1119 22:33:22.594299  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.594309  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:22.594317  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:22.594359  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:22.625976  229026 cri.go:89] found id: ""
	I1119 22:33:22.626001  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.626012  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:22.626027  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:22.626038  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:22.660094  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:22.660123  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:22.676931  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:22.676957  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:22.733420  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:22.733439  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:22.733450  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:22.765920  229026 logs.go:123] Gathering logs for kube-controller-manager [c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a] ...
	I1119 22:33:22.765952  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:22.791770  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:22.791795  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:22.832968  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:22.832994  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:22.920507  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:22.920540  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:22.985203  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:22.985241  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:25.512901  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:25.514058  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:25.514118  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:25.514214  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:25.556844  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:25.556876  229026 cri.go:89] found id: ""
	I1119 22:33:25.556887  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:25.556952  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:25.562892  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:25.562953  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:25.605067  229026 cri.go:89] found id: ""
	I1119 22:33:25.605124  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.605136  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:25.605145  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:25.605204  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:25.644356  229026 cri.go:89] found id: ""
	I1119 22:33:25.644385  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.644395  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:25.644403  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:25.644460  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:25.683152  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:25.683178  229026 cri.go:89] found id: ""
	I1119 22:33:25.683273  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:25.683342  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:25.688089  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:25.688208  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:25.725026  229026 cri.go:89] found id: ""
	I1119 22:33:25.725056  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.725065  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:25.725073  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:25.725244  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:25.761160  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:25.761204  229026 cri.go:89] found id: ""
	I1119 22:33:25.761216  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:25.761282  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:25.766966  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:25.767028  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:25.804510  229026 cri.go:89] found id: ""
	I1119 22:33:25.804540  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.804551  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:25.804559  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:25.804622  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:25.837652  229026 cri.go:89] found id: ""
	I1119 22:33:25.837679  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.837701  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:25.837712  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:25.837726  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:25.892405  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:25.892441  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:25.927183  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:25.927223  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:25.982585  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:25.982613  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:26.013887  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:26.013923  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:26.098577  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:26.098611  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:26.115217  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:26.115244  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:26.178958  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:26.178984  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:26.179005  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
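
The alternating "Checking apiserver healthz" / "stopped: ... connection refused" lines above come from minikube probing https://192.168.94.2:8443/healthz while that apiserver is down, then falling back to gathering CRI-O, kubelet and container logs. A small Go sketch of such a probe is below, assuming an untrusted apiserver certificate (hence skipping TLS verification for this unauthenticated health check only); it is an illustration, not minikube's code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz probes an apiserver /healthz endpoint. A transport-level error
// such as "connect: connection refused" means the apiserver is not serving,
// which is what the "stopped:" log lines above report.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The probing host does not trust the cluster CA, so verification
			// is skipped for this health-only request.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("stopped: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unhealthy: %s", resp.Status)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://192.168.94.2:8443/healthz"))
}
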
	W1119 22:33:23.608027  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:25.612411  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:28.107283  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	I1119 22:33:24.074503  257842 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:33:24.074696  257842 start.go:159] libmachine.API.Create for "default-k8s-diff-port-409987" (driver="docker")
	I1119 22:33:24.074724  257842 client.go:173] LocalClient.Create starting
	I1119 22:33:24.074791  257842 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem
	I1119 22:33:24.074871  257842 main.go:143] libmachine: Decoding PEM data...
	I1119 22:33:24.074891  257842 main.go:143] libmachine: Parsing certificate...
	I1119 22:33:24.074944  257842 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem
	I1119 22:33:24.074966  257842 main.go:143] libmachine: Decoding PEM data...
	I1119 22:33:24.074977  257842 main.go:143] libmachine: Parsing certificate...
	I1119 22:33:24.075254  257842 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:33:24.091285  257842 cli_runner.go:211] docker network inspect default-k8s-diff-port-409987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:33:24.091350  257842 network_create.go:284] running [docker network inspect default-k8s-diff-port-409987] to gather additional debugging logs...
	I1119 22:33:24.091365  257842 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409987
	W1119 22:33:24.108545  257842 cli_runner.go:211] docker network inspect default-k8s-diff-port-409987 returned with exit code 1
	I1119 22:33:24.108572  257842 network_create.go:287] error running [docker network inspect default-k8s-diff-port-409987]: docker network inspect default-k8s-diff-port-409987: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-409987 not found
	I1119 22:33:24.108587  257842 network_create.go:289] output of [docker network inspect default-k8s-diff-port-409987]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-409987 not found
	
	** /stderr **
	I1119 22:33:24.108708  257842 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:33:24.125616  257842 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cde0f356bd10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b5:fa:ba:e0:a6} reservation:<nil>}
	I1119 22:33:24.126341  257842 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-47fb5ce24a02 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:30:91:0e:d6:d9} reservation:<nil>}
	I1119 22:33:24.127005  257842 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2592199ffac9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:9b:dd:65:07:28} reservation:<nil>}
	I1119 22:33:24.127748  257842 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f40680}
	I1119 22:33:24.127768  257842 network_create.go:124] attempt to create docker network default-k8s-diff-port-409987 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 22:33:24.127824  257842 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 default-k8s-diff-port-409987
	I1119 22:33:24.174801  257842 network_create.go:108] docker network default-k8s-diff-port-409987 192.168.76.0/24 created
	I1119 22:33:24.174930  257842 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-409987" container
	I1119 22:33:24.174986  257842 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:33:24.193121  257842 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-409987 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:33:24.209597  257842 oci.go:103] Successfully created a docker volume default-k8s-diff-port-409987
	I1119 22:33:24.209672  257842 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-409987-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 --entrypoint /usr/bin/test -v default-k8s-diff-port-409987:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:33:24.605177  257842 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-409987
	I1119 22:33:24.605252  257842 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:24.605267  257842 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:33:24.605340  257842 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-409987:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
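
Above, minikube picks the first free private /24 (192.168.76.0/24, after skipping three subnets already in use), creates a Docker bridge network for default-k8s-diff-port-409987, and extracts the preloaded image tarball into the cluster volume. The Go sketch below reproduces only the "docker network create" invocation from the log, driven through os/exec; the flag set is copied from the logged command and the helper itself is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// createClusterNetwork issues the same "docker network create" call the log
// shows, once a free subnet and gateway have been chosen.
func createClusterNetwork(name, subnet, gateway string, mtu int) error {
	args := []string{
		"network", "create",
		"--driver=bridge",
		"--subnet=" + subnet,
		"--gateway=" + gateway,
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("docker network create failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Values taken from the log: the first private /24 not already taken.
	fmt.Println(createClusterNetwork("default-k8s-diff-port-409987",
		"192.168.76.0/24", "192.168.76.1", 1500))
}
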
	I1119 22:33:29.133052  247081 pod_ready.go:94] pod "coredns-66bc5c9577-9dwxr" is "Ready"
	I1119 22:33:29.133080  247081 pod_ready.go:86] duration metric: took 39.530851945s for pod "coredns-66bc5c9577-9dwxr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.138098  247081 pod_ready.go:83] waiting for pod "etcd-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.142937  247081 pod_ready.go:94] pod "etcd-no-preload-178067" is "Ready"
	I1119 22:33:29.142962  247081 pod_ready.go:86] duration metric: took 4.839499ms for pod "etcd-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.238949  247081 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.244009  247081 pod_ready.go:94] pod "kube-apiserver-no-preload-178067" is "Ready"
	I1119 22:33:29.244037  247081 pod_ready.go:86] duration metric: took 5.06142ms for pod "kube-apiserver-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.246567  247081 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.305183  247081 pod_ready.go:94] pod "kube-controller-manager-no-preload-178067" is "Ready"
	I1119 22:33:29.305208  247081 pod_ready.go:86] duration metric: took 58.619262ms for pod "kube-controller-manager-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.504991  247081 pod_ready.go:83] waiting for pod "kube-proxy-xll6z" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.905540  247081 pod_ready.go:94] pod "kube-proxy-xll6z" is "Ready"
	I1119 22:33:29.905566  247081 pod_ready.go:86] duration metric: took 400.551202ms for pod "kube-proxy-xll6z" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:30.105246  247081 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:30.505433  247081 pod_ready.go:94] pod "kube-scheduler-no-preload-178067" is "Ready"
	I1119 22:33:30.505459  247081 pod_ready.go:86] duration metric: took 400.188275ms for pod "kube-scheduler-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:30.505470  247081 pod_ready.go:40] duration metric: took 40.906421291s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:33:30.547626  247081 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:33:30.549623  247081 out.go:179] * Done! kubectl is now configured to use "no-preload-178067" cluster and "default" namespace by default
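
The pod_ready lines above wait for each control-plane pod in kube-system to report a Ready condition (or disappear) before the no-preload start is declared done. A short Go sketch of one such readiness check via kubectl's jsonpath output is given below; the pod name is taken from the log, and the helper is illustrative rather than the pod_ready.go implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reports whether the named pod's Ready condition is "True".
func podReady(namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", pod,
		"-n", namespace,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	// Pod name from the log; poll every 2s for up to 2 minutes.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := podReady("kube-system", "kube-scheduler-no-preload-178067"); err == nil && ok {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
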
	W1119 22:33:27.844382  252325 node_ready.go:57] node "embed-certs-443380" has "Ready":"False" status (will retry)
	W1119 22:33:30.253097  252325 node_ready.go:57] node "embed-certs-443380" has "Ready":"False" status (will retry)
	I1119 22:33:28.710624  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:28.711065  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:28.711113  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:28.711160  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:28.736722  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:28.736744  229026 cri.go:89] found id: ""
	I1119 22:33:28.736752  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:28.736803  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:28.741111  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:28.741177  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:28.766295  229026 cri.go:89] found id: ""
	I1119 22:33:28.766319  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.766327  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:28.766333  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:28.766378  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:28.791972  229026 cri.go:89] found id: ""
	I1119 22:33:28.791994  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.792001  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:28.792006  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:28.792056  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:28.818307  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:28.818327  229026 cri.go:89] found id: ""
	I1119 22:33:28.818335  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:28.818394  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:28.822683  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:28.822764  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:28.856448  229026 cri.go:89] found id: ""
	I1119 22:33:28.856499  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.856510  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:28.856518  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:28.856580  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:28.882557  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:28.882584  229026 cri.go:89] found id: ""
	I1119 22:33:28.882592  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:28.882645  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:28.886479  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:28.886545  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:28.912563  229026 cri.go:89] found id: ""
	I1119 22:33:28.912588  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.912595  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:28.912601  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:28.912644  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:28.937277  229026 cri.go:89] found id: ""
	I1119 22:33:28.937299  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.937306  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:28.937315  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:28.937326  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:28.966343  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:28.966368  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:29.014708  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:29.014743  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:29.040387  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:29.040411  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:29.082359  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:29.082390  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:29.111167  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:29.111194  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:29.215828  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:29.215865  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:29.230491  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:29.230519  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:29.295659  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:29.158194  257842 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-409987:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.55281447s)
	I1119 22:33:29.158220  257842 kic.go:203] duration metric: took 4.552950236s to extract preloaded images to volume ...
	W1119 22:33:29.158286  257842 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:33:29.158312  257842 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:33:29.158344  257842 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:33:29.217611  257842 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-409987 --name default-k8s-diff-port-409987 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-409987 --network default-k8s-diff-port-409987 --ip 192.168.76.2 --volume default-k8s-diff-port-409987:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:33:29.532541  257842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Running}}
	I1119 22:33:29.551244  257842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:33:29.569223  257842 cli_runner.go:164] Run: docker exec default-k8s-diff-port-409987 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:33:29.614972  257842 oci.go:144] the created container "default-k8s-diff-port-409987" has a running status.
	I1119 22:33:29.614999  257842 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa...
	I1119 22:33:29.811803  257842 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:33:29.835714  257842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:33:29.852802  257842 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:33:29.852845  257842 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-409987 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:33:29.895797  257842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:33:29.913061  257842 machine.go:94] provisionDockerMachine start ...
	I1119 22:33:29.913137  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:29.929995  257842 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:29.930308  257842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1119 22:33:29.930328  257842 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:33:29.931145  257842 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54386->127.0.0.1:33078: read: connection reset by peer
	I1119 22:33:33.055705  257842 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409987
	
	I1119 22:33:33.055755  257842 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-409987"
	I1119 22:33:33.055830  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.073640  257842 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:33.073912  257842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1119 22:33:33.073935  257842 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-409987 && echo "default-k8s-diff-port-409987" | sudo tee /etc/hostname
	I1119 22:33:33.206352  257842 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409987
	
	I1119 22:33:33.206423  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.224632  257842 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:33.224930  257842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1119 22:33:33.224968  257842 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-409987' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-409987/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-409987' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:33:33.347746  257842 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:33:33.347776  257842 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 22:33:33.347844  257842 ubuntu.go:190] setting up certificates
	I1119 22:33:33.347867  257842 provision.go:84] configureAuth start
	I1119 22:33:33.347925  257842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:33:33.365007  257842 provision.go:143] copyHostCerts
	I1119 22:33:33.365064  257842 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem, removing ...
	I1119 22:33:33.365077  257842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem
	I1119 22:33:33.365153  257842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 22:33:33.365253  257842 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem, removing ...
	I1119 22:33:33.365265  257842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem
	I1119 22:33:33.365299  257842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 22:33:33.365384  257842 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem, removing ...
	I1119 22:33:33.365393  257842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem
	I1119 22:33:33.365439  257842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 22:33:33.365514  257842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-409987 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-409987 localhost minikube]
	I1119 22:33:33.469295  257842 provision.go:177] copyRemoteCerts
	I1119 22:33:33.469350  257842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:33:33.469399  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.487180  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:33.579229  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:33:33.598170  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:33:33.615332  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:33:33.631707  257842 provision.go:87] duration metric: took 283.825271ms to configureAuth
	I1119 22:33:33.631738  257842 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:33:33.631927  257842 config.go:182] Loaded profile config "default-k8s-diff-port-409987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:33.632038  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.649525  257842 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:33.649754  257842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1119 22:33:33.649776  257842 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:33:33.911864  257842 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:33:33.911899  257842 machine.go:97] duration metric: took 3.998818366s to provisionDockerMachine
	I1119 22:33:33.911921  257842 client.go:176] duration metric: took 9.837189219s to LocalClient.Create
	I1119 22:33:33.911944  257842 start.go:167] duration metric: took 9.837246112s to libmachine.API.Create "default-k8s-diff-port-409987"
	I1119 22:33:33.911958  257842 start.go:293] postStartSetup for "default-k8s-diff-port-409987" (driver="docker")
	I1119 22:33:33.911972  257842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:33:33.912049  257842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:33:33.912100  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.930567  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:34.023978  257842 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:33:34.027239  257842 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:33:34.027262  257842 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:33:34.027271  257842 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 22:33:34.027334  257842 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 22:33:34.027439  257842 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem -> 128292.pem in /etc/ssl/certs
	I1119 22:33:34.027574  257842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:33:34.034703  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:33:34.053030  257842 start.go:296] duration metric: took 141.059047ms for postStartSetup
	I1119 22:33:34.053328  257842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:33:34.071401  257842 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json ...
	I1119 22:33:34.071655  257842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:33:34.071702  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:34.089393  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:34.179354  257842 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:33:34.184028  257842 start.go:128] duration metric: took 10.111081087s to createHost
	I1119 22:33:34.184050  257842 start.go:83] releasing machines lock for "default-k8s-diff-port-409987", held for 10.111205257s
	I1119 22:33:34.184110  257842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:33:34.201570  257842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:33:34.201588  257842 ssh_runner.go:195] Run: cat /version.json
	I1119 22:33:34.201638  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:34.201643  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:34.219778  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:34.220185  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:34.307356  257842 ssh_runner.go:195] Run: systemctl --version
	I1119 22:33:34.377088  257842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:33:34.409301  257842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:33:34.413564  257842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:33:34.413625  257842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:33:34.439025  257842 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:33:34.439049  257842 start.go:496] detecting cgroup driver to use...
	I1119 22:33:34.439080  257842 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:33:34.439115  257842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:33:34.453624  257842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:33:34.464939  257842 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:33:34.464985  257842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:33:34.480085  257842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:33:34.496141  257842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:33:34.577139  257842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:33:34.661485  257842 docker.go:234] disabling docker service ...
	I1119 22:33:34.661548  257842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:33:34.680544  257842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:33:34.693829  257842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:33:34.778614  257842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:33:34.863617  257842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:33:34.876075  257842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:33:34.890553  257842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:33:34.890610  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.901356  257842 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 22:33:34.901423  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.910601  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.920150  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.929306  257842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:33:34.937318  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.946730  257842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.960309  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.968769  257842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:33:34.977040  257842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:33:34.984350  257842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:35.075418  257842 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:33:35.218176  257842 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:33:35.218239  257842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:33:35.221998  257842 start.go:564] Will wait 60s for crictl version
	I1119 22:33:35.222046  257842 ssh_runner.go:195] Run: which crictl
	I1119 22:33:35.225560  257842 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:33:35.248793  257842 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:33:35.248876  257842 ssh_runner.go:195] Run: crio --version
	I1119 22:33:35.277023  257842 ssh_runner.go:195] Run: crio --version
	I1119 22:33:35.307857  257842 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
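The 257842 run above prepares the CRI-O runtime by pointing crictl at the CRI-O socket, rewriting the pause image and cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, restarting crio, and then confirming the runtime answers over /var/run/crio/crio.sock. Below is a minimal Go sketch of that same sequence of host commands; the run helper is a hypothetical stand-in for minikube's ssh_runner, not its actual code, and error handling is simplified.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes one host command and fails loudly; a stand-in for minikube's
    // ssh_runner, which runs the same commands over SSH inside the node.
    func run(name string, args ...string) {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
    	}
    }

    func main() {
    	// Point crictl at the CRI-O socket (what the tee of /etc/crictl.yaml does above).
    	run("sudo", "sh", "-c",
    		`printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml`)
    	// Switch the pause image and cgroup manager, as the sed calls above do.
    	run("sudo", "sed", "-i",
    		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`,
    		"/etc/crio/crio.conf.d/02-crio.conf")
    	run("sudo", "sed", "-i",
    		`s|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|`,
    		"/etc/crio/crio.conf.d/02-crio.conf")
    	// Reload units, restart CRI-O, and confirm the runtime responds.
    	run("sudo", "systemctl", "daemon-reload")
    	run("sudo", "systemctl", "restart", "crio")
    	run("sudo", "crictl", "version")
    }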
	W1119 22:33:32.253373  252325 node_ready.go:57] node "embed-certs-443380" has "Ready":"False" status (will retry)
	W1119 22:33:34.754649  252325 node_ready.go:57] node "embed-certs-443380" has "Ready":"False" status (will retry)
	I1119 22:33:31.796780  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:31.797236  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:31.797296  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:31.797357  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:31.822313  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:31.822330  229026 cri.go:89] found id: ""
	I1119 22:33:31.822337  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:31.822381  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:31.825911  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:31.825967  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:31.851805  229026 cri.go:89] found id: ""
	I1119 22:33:31.851852  229026 logs.go:282] 0 containers: []
	W1119 22:33:31.851859  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:31.851864  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:31.851918  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:31.877079  229026 cri.go:89] found id: ""
	I1119 22:33:31.877100  229026 logs.go:282] 0 containers: []
	W1119 22:33:31.877107  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:31.877113  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:31.877160  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:31.901847  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:31.901864  229026 cri.go:89] found id: ""
	I1119 22:33:31.901871  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:31.901909  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:31.906013  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:31.906067  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:31.930107  229026 cri.go:89] found id: ""
	I1119 22:33:31.930128  229026 logs.go:282] 0 containers: []
	W1119 22:33:31.930137  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:31.930144  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:31.930183  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:31.954253  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:31.954272  229026 cri.go:89] found id: ""
	I1119 22:33:31.954291  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:31.954347  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:31.957894  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:31.957950  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:31.982148  229026 cri.go:89] found id: ""
	I1119 22:33:31.982171  229026 logs.go:282] 0 containers: []
	W1119 22:33:31.982181  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:31.982187  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:31.982232  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:32.010776  229026 cri.go:89] found id: ""
	I1119 22:33:32.010801  229026 logs.go:282] 0 containers: []
	W1119 22:33:32.010809  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:32.010835  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:32.010850  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:32.036144  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:32.036167  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:32.078660  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:32.078684  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:32.106831  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:32.106857  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:32.189849  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:32.189874  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:32.203302  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:32.203326  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:32.257080  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:32.257098  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:32.257112  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:32.289358  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:32.289436  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:34.836503  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:34.836865  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:34.836919  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:34.836974  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:34.864697  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:34.864716  229026 cri.go:89] found id: ""
	I1119 22:33:34.864726  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:34.864788  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:34.868370  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:34.868423  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:34.894465  229026 cri.go:89] found id: ""
	I1119 22:33:34.894487  229026 logs.go:282] 0 containers: []
	W1119 22:33:34.894498  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:34.894505  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:34.894555  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:34.922777  229026 cri.go:89] found id: ""
	I1119 22:33:34.922798  229026 logs.go:282] 0 containers: []
	W1119 22:33:34.922810  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:34.922835  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:34.922886  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:34.949441  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:34.949462  229026 cri.go:89] found id: ""
	I1119 22:33:34.949471  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:34.949515  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:34.952986  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:34.953034  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:34.978855  229026 cri.go:89] found id: ""
	I1119 22:33:34.978885  229026 logs.go:282] 0 containers: []
	W1119 22:33:34.978896  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:34.978905  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:34.978956  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:35.004626  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:35.004650  229026 cri.go:89] found id: ""
	I1119 22:33:35.004658  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:35.004709  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:35.008905  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:35.008961  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:35.039110  229026 cri.go:89] found id: ""
	I1119 22:33:35.039132  229026 logs.go:282] 0 containers: []
	W1119 22:33:35.039141  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:35.039149  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:35.039202  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:35.065661  229026 cri.go:89] found id: ""
	I1119 22:33:35.065694  229026 logs.go:282] 0 containers: []
	W1119 22:33:35.065705  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:35.065719  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:35.065741  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:35.095020  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:35.095050  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:35.143773  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:35.143802  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:35.174044  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:35.174078  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:35.265375  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:35.265400  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:35.280716  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:35.280744  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:35.339887  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:35.339905  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:35.339919  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:35.375008  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:35.375028  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
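The 229026 run above is stuck in a retry loop: each cycle probes https://192.168.94.2:8443/healthz, gets connection refused, and then gathers component logs before trying again. A minimal sketch of that kind of healthz polling is shown below; the function name and timings are illustrative, not minikube's api_server.go.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls an apiserver /healthz endpoint until it returns 200
    // or the deadline passes, mirroring the retry loop in the log above.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver serving cert is not trusted by this host during bring-up,
    		// so verification is skipped for this illustrative probe only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.94.2:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }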
	I1119 22:33:35.308950  257842 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:33:35.327275  257842 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 22:33:35.331352  257842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:33:35.342840  257842 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:33:35.343008  257842 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:35.343065  257842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:33:35.374136  257842 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:33:35.374157  257842 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:33:35.374203  257842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:33:35.399179  257842 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:33:35.399198  257842 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:33:35.399205  257842 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1119 22:33:35.399280  257842 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-409987 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:33:35.399339  257842 ssh_runner.go:195] Run: crio config
	I1119 22:33:35.444494  257842 cni.go:84] Creating CNI manager for ""
	I1119 22:33:35.444513  257842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:33:35.444528  257842 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:33:35.444547  257842 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-409987 NodeName:default-k8s-diff-port-409987 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:33:35.444673  257842 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-409987"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:33:35.444731  257842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:33:35.452420  257842 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:33:35.452477  257842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:33:35.459942  257842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 22:33:35.471786  257842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:33:35.486354  257842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
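The kubeadm.yaml written above stacks four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one multi-document file. Below is a minimal sketch that splits such a file and checks the kubelet's cgroupDriver agrees with the "systemd" cgroup_manager CRI-O was configured with earlier; the file path and check are illustrative assumptions, using gopkg.in/yaml.v3 rather than anything from minikube itself.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm accepts several API objects in one file, separated by "---" lines.
    	for _, doc := range strings.Split(string(raw), "\n---\n") {
    		var obj map[string]interface{}
    		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
    			panic(err)
    		}
    		if obj["kind"] == "KubeletConfiguration" {
    			// Should agree with cgroup_manager = "systemd" in 02-crio.conf.
    			fmt.Println("kubelet cgroupDriver:", obj["cgroupDriver"])
    		}
    	}
    }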
	I1119 22:33:35.497770  257842 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:33:35.501361  257842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:33:35.510565  257842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:35.589911  257842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:33:35.612829  257842 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987 for IP: 192.168.76.2
	I1119 22:33:35.612849  257842 certs.go:195] generating shared ca certs ...
	I1119 22:33:35.612868  257842 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:35.613005  257842 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 22:33:35.613069  257842 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 22:33:35.613084  257842 certs.go:257] generating profile certs ...
	I1119 22:33:35.613150  257842 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.key
	I1119 22:33:35.613176  257842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.crt with IP's: []
	I1119 22:33:36.259839  257842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.crt ...
	I1119 22:33:36.259864  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.crt: {Name:mk51645faa5989875e782e359a15271baba6c64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:36.260055  257842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.key ...
	I1119 22:33:36.260072  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.key: {Name:mkcbdf4025b10d73f6acb70bea0cad4aaaa9a2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:36.260192  257842 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832
	I1119 22:33:36.260218  257842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt.e1aaa832 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 22:33:36.935157  257842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt.e1aaa832 ...
	I1119 22:33:36.935185  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt.e1aaa832: {Name:mka229d41a2be07fe6a31ff8c42ef5ff6a82a36c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:36.935348  257842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832 ...
	I1119 22:33:36.935366  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832: {Name:mk46e2ff9da97b96045d25f2b413ce78625d779e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:36.935473  257842 certs.go:382] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt.e1aaa832 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt
	I1119 22:33:36.935578  257842 certs.go:386] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key
	I1119 22:33:36.935666  257842 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key
	I1119 22:33:36.935689  257842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt with IP's: []
	I1119 22:33:37.249125  257842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt ...
	I1119 22:33:37.249156  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt: {Name:mkea403caf60bc3ff91af8eead4c159ce9fb0ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:37.249328  257842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key ...
	I1119 22:33:37.249343  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key: {Name:mk241b53e3e9b76398e3ef0e5e4da30803b4e527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
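The certs.go/crypto.go lines above reuse the shared minikubeCA and then generate the profile's client, apiserver, and aggregator certificates, signing the apiserver cert for the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1, and 192.168.76.2. A minimal crypto/x509 sketch of issuing a CA-signed serving cert with those IP SANs follows; it creates a throwaway CA for illustration (minikube reuses its existing CA key pair) and omits error handling.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Hypothetical stand-in CA; the real run loads the existing minikubeCA key pair.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Apiserver serving cert with the IP SANs seen in the log above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }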
	I1119 22:33:37.249518  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem (1338 bytes)
	W1119 22:33:37.249551  257842 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829_empty.pem, impossibly tiny 0 bytes
	I1119 22:33:37.249561  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:33:37.249581  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:33:37.249602  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:33:37.249623  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 22:33:37.249663  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:33:37.250283  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:33:37.269810  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:33:37.288148  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:33:37.304313  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:33:37.320191  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:33:37.336074  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:33:37.352096  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:33:37.367943  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:33:37.383751  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:33:37.401518  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem --> /usr/share/ca-certificates/12829.pem (1338 bytes)
	I1119 22:33:37.417219  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /usr/share/ca-certificates/128292.pem (1708 bytes)
	I1119 22:33:37.433104  257842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:33:37.444411  257842 ssh_runner.go:195] Run: openssl version
	I1119 22:33:37.449967  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:33:37.457635  257842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:37.460958  257842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:37.461009  257842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:37.495263  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:33:37.503611  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12829.pem && ln -fs /usr/share/ca-certificates/12829.pem /etc/ssl/certs/12829.pem"
	I1119 22:33:37.511484  257842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12829.pem
	I1119 22:33:37.515181  257842 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12829.pem
	I1119 22:33:37.515229  257842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12829.pem
	I1119 22:33:37.549064  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12829.pem /etc/ssl/certs/51391683.0"
	I1119 22:33:37.557207  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128292.pem && ln -fs /usr/share/ca-certificates/128292.pem /etc/ssl/certs/128292.pem"
	I1119 22:33:37.565171  257842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128292.pem
	I1119 22:33:37.568456  257842 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128292.pem
	I1119 22:33:37.568497  257842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128292.pem
	I1119 22:33:37.602554  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128292.pem /etc/ssl/certs/3ec20f2e.0"
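The symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the corresponding PEMs, which is how /etc/ssl/certs lookups find a CA by hash. A small illustrative Go sketch of computing that hash and creating the link for one of the certs is below; paths are taken from the log, the rest is an assumption, not minikube's code.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	certPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	// openssl x509 -hash prints the subject hash OpenSSL uses to look up CAs.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	// Mirror the ln -fs in the log: the hash name points at the installed PEM.
    	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
    		fmt.Println("symlink:", err)
    	}
    	fmt.Println("created", link)
    }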
	I1119 22:33:37.610380  257842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:33:37.613736  257842 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:33:37.613789  257842 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:33:37.613885  257842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:33:37.613954  257842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:33:37.640740  257842 cri.go:89] found id: ""
	I1119 22:33:37.640811  257842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:33:37.648414  257842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:33:37.655868  257842 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:33:37.655906  257842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:33:37.662990  257842 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:33:37.663005  257842 kubeadm.go:158] found existing configuration files:
	
	I1119 22:33:37.663036  257842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 22:33:37.670124  257842 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:33:37.670173  257842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:33:37.677380  257842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 22:33:37.686619  257842 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:33:37.686669  257842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:33:37.693439  257842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 22:33:37.700485  257842 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:33:37.700520  257842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:33:37.707378  257842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 22:33:37.714951  257842 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:33:37.714984  257842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
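Before running kubeadm init, the block above greps each existing kubeconfig under /etc/kubernetes for the expected endpoint https://control-plane.minikube.internal:8444 and removes any file that does not reference it (here they are all simply missing, so each rm is a no-op). A minimal sketch of that cleanup loop follows; the helper is illustrative, not minikube's kubeadm.go, and it must run with root privileges to touch those paths.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8444"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		// Missing files and files pointing at a different endpoint are both
    		// removed so kubeadm init starts from a clean slate, as in the log.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			os.Remove(f)
    			fmt.Println("removed stale config:", f)
    			continue
    		}
    		fmt.Println("keeping:", f)
    	}
    }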
	I1119 22:33:37.721780  257842 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:33:37.759244  257842 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:33:37.759294  257842 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:33:37.786995  257842 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:33:37.787082  257842 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:33:37.787129  257842 kubeadm.go:319] OS: Linux
	I1119 22:33:37.787187  257842 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:33:37.787260  257842 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:33:37.787357  257842 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:33:37.787443  257842 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:33:37.787529  257842 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:33:37.787609  257842 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:33:37.787686  257842 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:33:37.787779  257842 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:33:37.851453  257842 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:33:37.851600  257842 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:33:37.851724  257842 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:33:37.860973  257842 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:33:37.862911  257842 out.go:252]   - Generating certificates and keys ...
	I1119 22:33:37.863031  257842 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:33:37.863132  257842 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:33:37.987676  257842 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:33:38.117107  257842 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:33:38.304291  257842 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:33:38.419481  257842 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:33:38.673629  257842 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:33:38.673787  257842 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-409987 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:33:38.716286  257842 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:33:38.716448  257842 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-409987 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:33:38.841539  257842 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:33:37.253123  252325 node_ready.go:49] node "embed-certs-443380" is "Ready"
	I1119 22:33:37.253146  252325 node_ready.go:38] duration metric: took 11.503113839s for node "embed-certs-443380" to be "Ready" ...
	I1119 22:33:37.253158  252325 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:33:37.253193  252325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:33:37.264416  252325 api_server.go:72] duration metric: took 11.841983624s to wait for apiserver process to appear ...
	I1119 22:33:37.264435  252325 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:33:37.264448  252325 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:33:37.269949  252325 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 22:33:37.270720  252325 api_server.go:141] control plane version: v1.34.1
	I1119 22:33:37.270741  252325 api_server.go:131] duration metric: took 6.29992ms to wait for apiserver health ...
	I1119 22:33:37.270748  252325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:33:37.273648  252325 system_pods.go:59] 8 kube-system pods found
	I1119 22:33:37.273681  252325 system_pods.go:61] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:33:37.273687  252325 system_pods.go:61] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:37.273692  252325 system_pods.go:61] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:37.273695  252325 system_pods.go:61] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:37.273699  252325 system_pods.go:61] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:37.273702  252325 system_pods.go:61] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:37.273705  252325 system_pods.go:61] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:37.273710  252325 system_pods.go:61] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:33:37.273719  252325 system_pods.go:74] duration metric: took 2.966347ms to wait for pod list to return data ...
	I1119 22:33:37.273726  252325 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:33:37.275697  252325 default_sa.go:45] found service account: "default"
	I1119 22:33:37.275714  252325 default_sa.go:55] duration metric: took 1.983922ms for default service account to be created ...
	I1119 22:33:37.275722  252325 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:33:37.278323  252325 system_pods.go:86] 8 kube-system pods found
	I1119 22:33:37.278347  252325 system_pods.go:89] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:33:37.278357  252325 system_pods.go:89] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:37.278362  252325 system_pods.go:89] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:37.278366  252325 system_pods.go:89] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:37.278370  252325 system_pods.go:89] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:37.278373  252325 system_pods.go:89] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:37.278376  252325 system_pods.go:89] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:37.278380  252325 system_pods.go:89] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:33:37.278397  252325 retry.go:31] will retry after 216.008228ms: missing components: kube-dns
	I1119 22:33:37.498308  252325 system_pods.go:86] 8 kube-system pods found
	I1119 22:33:37.498341  252325 system_pods.go:89] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:33:37.498349  252325 system_pods.go:89] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:37.498359  252325 system_pods.go:89] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:37.498366  252325 system_pods.go:89] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:37.498373  252325 system_pods.go:89] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:37.498379  252325 system_pods.go:89] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:37.498384  252325 system_pods.go:89] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:37.498396  252325 system_pods.go:89] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:33:37.498412  252325 retry.go:31] will retry after 271.433631ms: missing components: kube-dns
	I1119 22:33:37.773981  252325 system_pods.go:86] 8 kube-system pods found
	I1119 22:33:37.774011  252325 system_pods.go:89] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:33:37.774024  252325 system_pods.go:89] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:37.774029  252325 system_pods.go:89] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:37.774033  252325 system_pods.go:89] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:37.774037  252325 system_pods.go:89] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:37.774040  252325 system_pods.go:89] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:37.774043  252325 system_pods.go:89] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:37.774048  252325 system_pods.go:89] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:33:37.774061  252325 retry.go:31] will retry after 422.422645ms: missing components: kube-dns
	I1119 22:33:38.201323  252325 system_pods.go:86] 8 kube-system pods found
	I1119 22:33:38.201351  252325 system_pods.go:89] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Running
	I1119 22:33:38.201358  252325 system_pods.go:89] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:38.201364  252325 system_pods.go:89] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:38.201370  252325 system_pods.go:89] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:38.201377  252325 system_pods.go:89] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:38.201382  252325 system_pods.go:89] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:38.201387  252325 system_pods.go:89] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:38.201392  252325 system_pods.go:89] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Running
	I1119 22:33:38.201410  252325 system_pods.go:126] duration metric: took 925.672892ms to wait for k8s-apps to be running ...
	I1119 22:33:38.201420  252325 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:33:38.201470  252325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:33:38.215425  252325 system_svc.go:56] duration metric: took 13.999039ms WaitForService to wait for kubelet
	I1119 22:33:38.215452  252325 kubeadm.go:587] duration metric: took 12.793019797s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:33:38.215473  252325 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:33:38.218207  252325 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:33:38.218230  252325 node_conditions.go:123] node cpu capacity is 8
	I1119 22:33:38.218241  252325 node_conditions.go:105] duration metric: took 2.763018ms to run NodePressure ...
	I1119 22:33:38.218255  252325 start.go:242] waiting for startup goroutines ...
	I1119 22:33:38.218268  252325 start.go:247] waiting for cluster config update ...
	I1119 22:33:38.218285  252325 start.go:256] writing updated cluster config ...
	I1119 22:33:38.218604  252325 ssh_runner.go:195] Run: rm -f paused
	I1119 22:33:38.222676  252325 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:33:38.226257  252325 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jmjmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.230497  252325 pod_ready.go:94] pod "coredns-66bc5c9577-jmjmf" is "Ready"
	I1119 22:33:38.230536  252325 pod_ready.go:86] duration metric: took 4.244524ms for pod "coredns-66bc5c9577-jmjmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.232661  252325 pod_ready.go:83] waiting for pod "etcd-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.236397  252325 pod_ready.go:94] pod "etcd-embed-certs-443380" is "Ready"
	I1119 22:33:38.236416  252325 pod_ready.go:86] duration metric: took 3.737265ms for pod "etcd-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.238310  252325 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.242981  252325 pod_ready.go:94] pod "kube-apiserver-embed-certs-443380" is "Ready"
	I1119 22:33:38.242999  252325 pod_ready.go:86] duration metric: took 4.670826ms for pod "kube-apiserver-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.244923  252325 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.627463  252325 pod_ready.go:94] pod "kube-controller-manager-embed-certs-443380" is "Ready"
	I1119 22:33:38.627488  252325 pod_ready.go:86] duration metric: took 382.549793ms for pod "kube-controller-manager-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.827739  252325 pod_ready.go:83] waiting for pod "kube-proxy-r5xtg" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:39.226142  252325 pod_ready.go:94] pod "kube-proxy-r5xtg" is "Ready"
	I1119 22:33:39.226169  252325 pod_ready.go:86] duration metric: took 398.408001ms for pod "kube-proxy-r5xtg" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:39.427580  252325 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:39.827781  252325 pod_ready.go:94] pod "kube-scheduler-embed-certs-443380" is "Ready"
	I1119 22:33:39.827836  252325 pod_ready.go:86] duration metric: took 400.201717ms for pod "kube-scheduler-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:39.827853  252325 pod_ready.go:40] duration metric: took 1.605146507s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:33:39.871483  252325 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:33:39.873526  252325 out.go:179] * Done! kubectl is now configured to use "embed-certs-443380" cluster and "default" namespace by default
	I1119 22:33:39.323985  257842 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:33:39.442549  257842 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:33:39.442737  257842 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:33:39.627688  257842 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:33:40.036493  257842 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:33:40.698146  257842 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:33:40.961731  257842 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:33:41.149359  257842 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:33:41.150288  257842 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:33:41.154317  257842 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:33:37.929115  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:37.929464  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:37.929520  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:37.929565  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:37.955356  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:37.955379  229026 cri.go:89] found id: ""
	I1119 22:33:37.955388  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:37.955438  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:37.959319  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:37.959393  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:37.984434  229026 cri.go:89] found id: ""
	I1119 22:33:37.984458  229026 logs.go:282] 0 containers: []
	W1119 22:33:37.984468  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:37.984475  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:37.984526  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:38.012164  229026 cri.go:89] found id: ""
	I1119 22:33:38.012190  229026 logs.go:282] 0 containers: []
	W1119 22:33:38.012199  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:38.012204  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:38.012285  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:38.036173  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:38.036195  229026 cri.go:89] found id: ""
	I1119 22:33:38.036205  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:38.036257  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:38.039850  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:38.039898  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:38.064432  229026 cri.go:89] found id: ""
	I1119 22:33:38.064452  229026 logs.go:282] 0 containers: []
	W1119 22:33:38.064461  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:38.064467  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:38.064514  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:38.090526  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:38.090548  229026 cri.go:89] found id: ""
	I1119 22:33:38.090557  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:38.090607  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:38.094245  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:38.094302  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:38.121462  229026 cri.go:89] found id: ""
	I1119 22:33:38.121481  229026 logs.go:282] 0 containers: []
	W1119 22:33:38.121491  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:38.121498  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:38.121549  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:38.146752  229026 cri.go:89] found id: ""
	I1119 22:33:38.146772  229026 logs.go:282] 0 containers: []
	W1119 22:33:38.146778  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:38.146787  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:38.146796  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:38.196010  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:38.196033  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:38.223390  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:38.223411  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:38.270213  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:38.270241  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:38.299662  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:38.299691  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:38.386912  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:38.386944  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:38.400305  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:38.400339  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:38.455714  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:38.455731  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:38.455743  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:40.987565  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:40.987943  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:40.987996  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:40.988049  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:41.016569  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:41.016586  229026 cri.go:89] found id: ""
	I1119 22:33:41.016593  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:41.016633  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:41.020316  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:41.020366  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:41.046437  229026 cri.go:89] found id: ""
	I1119 22:33:41.046457  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.046463  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:41.046468  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:41.046529  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:41.072677  229026 cri.go:89] found id: ""
	I1119 22:33:41.072701  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.072711  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:41.072719  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:41.072769  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:41.099927  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:41.099949  229026 cri.go:89] found id: ""
	I1119 22:33:41.099959  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:41.100014  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:41.104773  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:41.104852  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:41.139008  229026 cri.go:89] found id: ""
	I1119 22:33:41.139034  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.139043  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:41.139051  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:41.139109  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:41.170661  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:41.170688  229026 cri.go:89] found id: ""
	I1119 22:33:41.170706  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:41.170763  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:41.174802  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:41.174872  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:41.209289  229026 cri.go:89] found id: ""
	I1119 22:33:41.209313  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.209323  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:41.209330  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:41.209383  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:41.248091  229026 cri.go:89] found id: ""
	I1119 22:33:41.248112  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.248119  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:41.248128  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:41.248139  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:41.341775  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:41.341806  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:41.355629  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:41.355651  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:41.412102  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:41.412120  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:41.412132  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:41.440857  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:41.440882  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:41.488518  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:41.488550  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:41.514120  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:41.514142  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:41.156021  257842 out.go:252]   - Booting up control plane ...
	I1119 22:33:41.156147  257842 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:33:41.156248  257842 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:33:41.157952  257842 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:33:41.175709  257842 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:33:41.175884  257842 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:33:41.184676  257842 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:33:41.185076  257842 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:33:41.185168  257842 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:33:41.291554  257842 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:33:41.291689  257842 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:33:41.793221  257842 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.701541ms
	I1119 22:33:41.796084  257842 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:33:41.796211  257842 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1119 22:33:41.796352  257842 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:33:41.796490  257842 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:33:43.308442  257842 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.512357875s
	I1119 22:33:44.044905  257842 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.248841505s
	I1119 22:33:45.797611  257842 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001570497s
	I1119 22:33:45.808645  257842 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:33:45.819335  257842 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:33:45.830686  257842 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:33:45.830987  257842 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-409987 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:33:45.839041  257842 kubeadm.go:319] [bootstrap-token] Using token: o014qj.gxcv4zxy9pcntvf3
	I1119 22:33:41.559717  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:41.559743  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:44.099962  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:44.100339  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:44.100389  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:44.100433  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:44.128669  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:44.128702  229026 cri.go:89] found id: ""
	I1119 22:33:44.128712  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:44.128764  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:44.132578  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:44.132631  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:44.158942  229026 cri.go:89] found id: ""
	I1119 22:33:44.158963  229026 logs.go:282] 0 containers: []
	W1119 22:33:44.158972  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:44.158978  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:44.159035  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:44.185157  229026 cri.go:89] found id: ""
	I1119 22:33:44.185180  229026 logs.go:282] 0 containers: []
	W1119 22:33:44.185187  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:44.185193  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:44.185237  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:44.225356  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:44.225380  229026 cri.go:89] found id: ""
	I1119 22:33:44.225389  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:44.225443  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:44.231443  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:44.231519  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:44.269502  229026 cri.go:89] found id: ""
	I1119 22:33:44.269629  229026 logs.go:282] 0 containers: []
	W1119 22:33:44.269686  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:44.269703  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:44.269775  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:44.298843  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:44.298867  229026 cri.go:89] found id: ""
	I1119 22:33:44.298876  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:44.298937  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:44.302692  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:44.302747  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:44.329142  229026 cri.go:89] found id: ""
	I1119 22:33:44.329168  229026 logs.go:282] 0 containers: []
	W1119 22:33:44.329179  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:44.329186  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:44.329242  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:44.354082  229026 cri.go:89] found id: ""
	I1119 22:33:44.354108  229026 logs.go:282] 0 containers: []
	W1119 22:33:44.354118  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:44.354128  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:44.354144  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:44.384805  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:44.384860  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:44.481636  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:44.481663  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:44.498395  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:44.498441  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:44.558873  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:44.558897  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:44.558912  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:44.595563  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:44.595588  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:44.647710  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:44.647740  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:44.672338  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:44.672362  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	
	
	==> CRI-O <==
	Nov 19 22:33:06 no-preload-178067 crio[580]: time="2025-11-19T22:33:06.118060029Z" level=info msg="Started container" PID=1790 containerID=323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b/dashboard-metrics-scraper id=3b85e31c-0462-4d80-99c3-99a9d3c6e25a name=/runtime.v1.RuntimeService/StartContainer sandboxID=90007c75ed69c8a90e6e3581234ee52a9e54f5b1fc947d2e0799377a0886fdd6
	Nov 19 22:33:06 no-preload-178067 crio[580]: time="2025-11-19T22:33:06.632025427Z" level=info msg="Removing container: 880c7b57f2efdc9c6ffe3a69810af6eff3881d8549591192e62888d2f8df29bc" id=60002f2a-7f76-4c8b-8805-18197b45e34c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:33:06 no-preload-178067 crio[580]: time="2025-11-19T22:33:06.654513598Z" level=info msg="Removed container 880c7b57f2efdc9c6ffe3a69810af6eff3881d8549591192e62888d2f8df29bc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b/dashboard-metrics-scraper" id=60002f2a-7f76-4c8b-8805-18197b45e34c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.666097372Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a8aa22cc-3615-4904-9409-93a69fdfdf2f name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.667519365Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a850c217-d407-4ed2-aa91-797cd2cdfb25 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.668702679Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2721d545-82bc-4b4d-8bac-989543fb4ad6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.668870612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.67433721Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.674521293Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e87b2122d6da93f4dac3cee3d32b7054b97ad3b896902e000553ed502d8280db/merged/etc/passwd: no such file or directory"
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.674549455Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e87b2122d6da93f4dac3cee3d32b7054b97ad3b896902e000553ed502d8280db/merged/etc/group: no such file or directory"
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.674788354Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.701026321Z" level=info msg="Created container ca69bb794dfbdb19cc9d54b5b28bbc9d94279dffeb4a9e6a23344c42401ad048: kube-system/storage-provisioner/storage-provisioner" id=2721d545-82bc-4b4d-8bac-989543fb4ad6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.702523333Z" level=info msg="Starting container: ca69bb794dfbdb19cc9d54b5b28bbc9d94279dffeb4a9e6a23344c42401ad048" id=3df70c4d-e68b-4819-a4cf-0bf924d0d7e2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:33:19 no-preload-178067 crio[580]: time="2025-11-19T22:33:19.704878351Z" level=info msg="Started container" PID=1804 containerID=ca69bb794dfbdb19cc9d54b5b28bbc9d94279dffeb4a9e6a23344c42401ad048 description=kube-system/storage-provisioner/storage-provisioner id=3df70c4d-e68b-4819-a4cf-0bf924d0d7e2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5dace2361c932931e9efb907fe7f6efa98b7d0bda47515b5fddb3f88c5ba5e5a
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.555212574Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dd310b70-b261-46dd-bfa8-4ff7bf6e57e2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.578393634Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0779120a-e5ba-4bf7-966e-616a309a8a88 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.579378994Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b/dashboard-metrics-scraper" id=fdecc645-45c7-4b97-b557-662ce7d4ad15 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.579501358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.668696656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.669157066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.822302944Z" level=info msg="Created container 1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b/dashboard-metrics-scraper" id=fdecc645-45c7-4b97-b557-662ce7d4ad15 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.823047809Z" level=info msg="Starting container: 1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed" id=639b5ca2-675e-4436-a170-6e55164ca5b4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:33:28 no-preload-178067 crio[580]: time="2025-11-19T22:33:28.825336636Z" level=info msg="Started container" PID=1819 containerID=1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b/dashboard-metrics-scraper id=639b5ca2-675e-4436-a170-6e55164ca5b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90007c75ed69c8a90e6e3581234ee52a9e54f5b1fc947d2e0799377a0886fdd6
	Nov 19 22:33:29 no-preload-178067 crio[580]: time="2025-11-19T22:33:29.693120161Z" level=info msg="Removing container: 323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1" id=2b68eeca-aa7f-4a99-801b-084fbaf49db9 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:33:29 no-preload-178067 crio[580]: time="2025-11-19T22:33:29.703030853Z" level=info msg="Removed container 323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b/dashboard-metrics-scraper" id=2b68eeca-aa7f-4a99-801b-084fbaf49db9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1a13a9b1ed1ea       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago       Exited              dashboard-metrics-scraper   3                   90007c75ed69c       dashboard-metrics-scraper-6ffb444bf9-s7v5b   kubernetes-dashboard
	ca69bb794dfbd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   5dace2361c932       storage-provisioner                          kube-system
	1b322960c77f5       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago       Running             kubernetes-dashboard        0                   83a06a191146d       kubernetes-dashboard-855c9754f9-c59j5        kubernetes-dashboard
	5d9a3926452fe       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           58 seconds ago       Running             coredns                     0                   ba7fe49028090       coredns-66bc5c9577-9dwxr                     kube-system
	7b9d1041c9ea2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   77392b0f52c77       busybox                                      default
	63b4d5c69223f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   5dace2361c932       storage-provisioner                          kube-system
	c4eb1fb19b099       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   c3c61b0a43b65       kindnet-4rclw                                kube-system
	86197cbc9c40e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           58 seconds ago       Running             kube-proxy                  0                   2490870fe085f       kube-proxy-xll6z                             kube-system
	2b0da6046bd3a       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   0a7e86364e094       kube-apiserver-no-preload-178067             kube-system
	4b15ce24a3aaf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   5ce4a70de4902       etcd-no-preload-178067                       kube-system
	8cdd1b2386fc9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   9bc3df812c114       kube-scheduler-no-preload-178067             kube-system
	a8dcf65794e21       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   025ee82a1f333       kube-controller-manager-no-preload-178067    kube-system
	
	
	==> coredns [5d9a3926452fe1153e2f2a4f626a6a7edc0937440208143a2bbde7bf7330c415] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46928 - 20273 "HINFO IN 1005414316487781050.7694635695379333558. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.12614137s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-178067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-178067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=no-preload-178067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_31_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:31:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-178067
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:33:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:33:18 +0000   Wed, 19 Nov 2025 22:31:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:33:18 +0000   Wed, 19 Nov 2025 22:31:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:33:18 +0000   Wed, 19 Nov 2025 22:31:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:33:18 +0000   Wed, 19 Nov 2025 22:32:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-178067
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                4f7d1af3-d456-499c-ab45-67c0314eb59f
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-9dwxr                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-no-preload-178067                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-4rclw                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-178067              250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-178067     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-xll6z                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-178067              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-s7v5b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-c59j5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  117s               kubelet          Node no-preload-178067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s               kubelet          Node no-preload-178067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s               kubelet          Node no-preload-178067 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node no-preload-178067 event: Registered Node no-preload-178067 in Controller
	  Normal  NodeReady                98s                kubelet          Node no-preload-178067 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node no-preload-178067 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node no-preload-178067 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node no-preload-178067 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node no-preload-178067 event: Registered Node no-preload-178067 in Controller
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [4b15ce24a3aaf48f3b98e89cd8a66d0595225b1070cc8af2af5fbc40d5f34ef7] <==
	{"level":"warn","ts":"2025-11-19T22:32:47.427124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.434437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.441257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.447491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.453343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.459558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.474514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.480327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.487031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:32:47.535632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:06.246647Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.618262ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b.18798931978aef76\" limit:1 ","response":"range_response_count:1 size:874"}
	{"level":"warn","ts":"2025-11-19T22:33:06.246712Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.64908ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-9dwxr\" limit:1 ","response":"range_response_count:1 size:5935"}
	{"level":"info","ts":"2025-11-19T22:33:06.246758Z","caller":"traceutil/trace.go:172","msg":"trace[1404333613] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-9dwxr; range_end:; response_count:1; response_revision:603; }","duration":"142.70481ms","start":"2025-11-19T22:33:06.104045Z","end":"2025-11-19T22:33:06.246750Z","steps":["trace[1404333613] 'range keys from in-memory index tree'  (duration: 142.470259ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:33:06.246729Z","caller":"traceutil/trace.go:172","msg":"trace[655126030] range","detail":"{range_begin:/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b.18798931978aef76; range_end:; response_count:1; response_revision:603; }","duration":"129.713433ms","start":"2025-11-19T22:33:06.117003Z","end":"2025-11-19T22:33:06.246717Z","steps":["trace[655126030] 'range keys from in-memory index tree'  (duration: 129.498234ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:33:28.974973Z","caller":"traceutil/trace.go:172","msg":"trace[1420919189] linearizableReadLoop","detail":"{readStateIndex:663; appliedIndex:663; }","duration":"117.389074ms","start":"2025-11-19T22:33:28.857550Z","end":"2025-11-19T22:33:28.974940Z","steps":["trace[1420919189] 'read index received'  (duration: 117.378774ms)","trace[1420919189] 'applied index is now lower than readState.Index'  (duration: 8.72µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:33:29.121228Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"263.656519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T22:33:29.121303Z","caller":"traceutil/trace.go:172","msg":"trace[1351738011] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:628; }","duration":"263.740379ms","start":"2025-11-19T22:33:28.857546Z","end":"2025-11-19T22:33:29.121287Z","steps":["trace[1351738011] 'agreement among raft nodes before linearized reading'  (duration: 117.497302ms)","trace[1351738011] 'range keys from in-memory index tree'  (duration: 146.128651ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:33:29.121864Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.147717ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790131555747815 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b.1879893197dcd9cd\" mod_revision:605 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b.1879893197dcd9cd\" value_size:743 lease:4650418094700971391 >> failure:<request_range:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b.1879893197dcd9cd\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-19T22:33:29.121978Z","caller":"traceutil/trace.go:172","msg":"trace[1860012837] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"268.880791ms","start":"2025-11-19T22:33:28.853081Z","end":"2025-11-19T22:33:29.121961Z","steps":["trace[1860012837] 'process raft request'  (duration: 122.030557ms)","trace[1860012837] 'compare'  (duration: 146.068374ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:33:29.122055Z","caller":"traceutil/trace.go:172","msg":"trace[992149599] linearizableReadLoop","detail":"{readStateIndex:664; appliedIndex:663; }","duration":"146.994654ms","start":"2025-11-19T22:33:28.975042Z","end":"2025-11-19T22:33:29.122037Z","steps":["trace[992149599] 'read index received'  (duration: 24.979771ms)","trace[992149599] 'applied index is now lower than readState.Index'  (duration: 122.013178ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-19T22:33:29.122106Z","caller":"traceutil/trace.go:172","msg":"trace[269193656] transaction","detail":"{read_only:false; response_revision:632; number_of_response:1; }","duration":"262.431371ms","start":"2025-11-19T22:33:28.859667Z","end":"2025-11-19T22:33:29.122098Z","steps":["trace[269193656] 'process raft request'  (duration: 262.39437ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:33:29.122228Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"210.24163ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper\" limit:1 ","response":"range_response_count:1 size:842"}
	{"level":"info","ts":"2025-11-19T22:33:29.122233Z","caller":"traceutil/trace.go:172","msg":"trace[216327572] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"264.016237ms","start":"2025-11-19T22:33:28.858205Z","end":"2025-11-19T22:33:29.122221Z","steps":["trace[216327572] 'process raft request'  (duration: 263.745664ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:33:29.122257Z","caller":"traceutil/trace.go:172","msg":"trace[1508567573] range","detail":"{range_begin:/registry/services/endpoints/kubernetes-dashboard/dashboard-metrics-scraper; range_end:; response_count:1; response_revision:632; }","duration":"210.277826ms","start":"2025-11-19T22:33:28.911971Z","end":"2025-11-19T22:33:29.122249Z","steps":["trace[1508567573] 'agreement among raft nodes before linearized reading'  (duration: 210.161953ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:33:29.122271Z","caller":"traceutil/trace.go:172","msg":"trace[1127934729] transaction","detail":"{read_only:false; response_revision:631; number_of_response:1; }","duration":"263.942181ms","start":"2025-11-19T22:33:28.858319Z","end":"2025-11-19T22:33:29.122261Z","steps":["trace[1127934729] 'process raft request'  (duration: 263.700777ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:33:47 up  1:16,  0 user,  load average: 2.27, 2.65, 1.80
	Linux no-preload-178067 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c4eb1fb19b099d7480679ca495008b509002cc63b9e988d15483d29f4cffa841] <==
	I1119 22:32:49.114712       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:32:49.142703       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 22:32:49.142862       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:32:49.142880       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:32:49.142899       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:32:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:32:49.313596       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:32:49.313620       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:32:49.313633       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:32:49.313757       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:32:49.613708       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:32:49.613730       1 metrics.go:72] Registering metrics
	I1119 22:32:49.613846       1 controller.go:711] "Syncing nftables rules"
	I1119 22:32:59.313959       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:32:59.314016       1 main.go:301] handling current node
	I1119 22:33:09.316927       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:33:09.316973       1 main.go:301] handling current node
	I1119 22:33:19.313446       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:33:19.313485       1 main.go:301] handling current node
	I1119 22:33:29.314024       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:33:29.314062       1 main.go:301] handling current node
	I1119 22:33:39.316415       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1119 22:33:39.316444       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2b0da6046bd3a9d1409a02171cd110e7f7c80d13375006ef7726a6948b964a45] <==
	I1119 22:32:47.992691       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	E1119 22:32:47.995797       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 22:32:47.996625       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:32:47.998604       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 22:32:48.004754       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 22:32:48.004825       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:32:48.005733       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 22:32:48.005942       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 22:32:48.006020       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1119 22:32:48.006104       1 aggregator.go:171] initial CRD sync complete...
	I1119 22:32:48.006129       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:32:48.006135       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:32:48.006145       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:32:48.026579       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:32:48.285767       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:32:48.311626       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:32:48.328570       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:32:48.334126       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:32:48.340694       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:32:48.373916       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.44.57"}
	I1119 22:32:48.383256       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.13.160"}
	I1119 22:32:48.901343       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:32:51.465335       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:32:51.864723       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:32:51.914070       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a8dcf65794e2178ac75421c7fa689f31104856b8f819faab188b47806609c062] <==
	I1119 22:32:51.312280       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:32:51.312366       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-178067"
	I1119 22:32:51.312422       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 22:32:51.313022       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 22:32:51.317385       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:32:51.321642       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:32:51.361210       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 22:32:51.361226       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:32:51.361247       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 22:32:51.361261       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:32:51.361208       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 22:32:51.361329       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:32:51.361338       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:32:51.361346       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:32:51.361643       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:32:51.361977       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:32:51.362193       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 22:32:51.362558       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:32:51.363666       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:32:51.365567       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 22:32:51.368853       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:32:51.378106       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:32:51.380355       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:32:51.382620       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:32:51.391896       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [86197cbc9c40eb4956802a892d3451ccc5f998c8c7d732efd889058c5af9dc86] <==
	I1119 22:32:48.958251       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:32:49.032760       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:32:49.133286       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:32:49.133313       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 22:32:49.133414       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:32:49.150866       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:32:49.150915       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:32:49.155938       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:32:49.156378       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:32:49.156406       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:32:49.157532       1 config.go:200] "Starting service config controller"
	I1119 22:32:49.157563       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:32:49.157564       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:32:49.157589       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:32:49.157616       1 config.go:309] "Starting node config controller"
	I1119 22:32:49.157625       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:32:49.157632       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:32:49.157690       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:32:49.157717       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:32:49.258714       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:32:49.258752       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:32:49.258716       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [8cdd1b2386fc9d6e80ae7431ec6d46c12963b7da1447247ecf7b9cd33805a53e] <==
	I1119 22:32:46.574505       1 serving.go:386] Generated self-signed cert in-memory
	I1119 22:32:47.966415       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 22:32:47.966437       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:32:47.971857       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1119 22:32:47.971887       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1119 22:32:47.971923       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:32:47.971932       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:32:47.971947       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:32:47.971953       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:32:47.972104       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:32:47.972194       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 22:32:48.072542       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1119 22:32:48.072573       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1119 22:32:48.072608       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:32:55 no-preload-178067 kubelet[728]: I1119 22:32:55.598480     728 scope.go:117] "RemoveContainer" containerID="86f06e16e9f2d3272e29039cfc54d8e3badf0c15bc5b1d8d7ad65819a7ecd41b"
	Nov 19 22:32:56 no-preload-178067 kubelet[728]: I1119 22:32:56.603091     728 scope.go:117] "RemoveContainer" containerID="86f06e16e9f2d3272e29039cfc54d8e3badf0c15bc5b1d8d7ad65819a7ecd41b"
	Nov 19 22:32:56 no-preload-178067 kubelet[728]: I1119 22:32:56.603252     728 scope.go:117] "RemoveContainer" containerID="880c7b57f2efdc9c6ffe3a69810af6eff3881d8549591192e62888d2f8df29bc"
	Nov 19 22:32:56 no-preload-178067 kubelet[728]: E1119 22:32:56.603490     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s7v5b_kubernetes-dashboard(fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b" podUID="fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4"
	Nov 19 22:32:57 no-preload-178067 kubelet[728]: I1119 22:32:57.608269     728 scope.go:117] "RemoveContainer" containerID="880c7b57f2efdc9c6ffe3a69810af6eff3881d8549591192e62888d2f8df29bc"
	Nov 19 22:32:57 no-preload-178067 kubelet[728]: E1119 22:32:57.608441     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s7v5b_kubernetes-dashboard(fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b" podUID="fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4"
	Nov 19 22:32:58 no-preload-178067 kubelet[728]: I1119 22:32:58.736362     728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 22:32:59 no-preload-178067 kubelet[728]: I1119 22:32:59.623495     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-c59j5" podStartSLOduration=2.05186628 podStartE2EDuration="8.623474646s" podCreationTimestamp="2025-11-19 22:32:51 +0000 UTC" firstStartedPulling="2025-11-19 22:32:52.16004209 +0000 UTC m=+6.712449093" lastFinishedPulling="2025-11-19 22:32:58.731650442 +0000 UTC m=+13.284057459" observedRunningTime="2025-11-19 22:32:59.623245751 +0000 UTC m=+14.175652776" watchObservedRunningTime="2025-11-19 22:32:59.623474646 +0000 UTC m=+14.175881669"
	Nov 19 22:33:05 no-preload-178067 kubelet[728]: I1119 22:33:05.879090     728 scope.go:117] "RemoveContainer" containerID="880c7b57f2efdc9c6ffe3a69810af6eff3881d8549591192e62888d2f8df29bc"
	Nov 19 22:33:06 no-preload-178067 kubelet[728]: I1119 22:33:06.630677     728 scope.go:117] "RemoveContainer" containerID="880c7b57f2efdc9c6ffe3a69810af6eff3881d8549591192e62888d2f8df29bc"
	Nov 19 22:33:06 no-preload-178067 kubelet[728]: I1119 22:33:06.630927     728 scope.go:117] "RemoveContainer" containerID="323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1"
	Nov 19 22:33:06 no-preload-178067 kubelet[728]: E1119 22:33:06.631135     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s7v5b_kubernetes-dashboard(fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b" podUID="fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4"
	Nov 19 22:33:15 no-preload-178067 kubelet[728]: I1119 22:33:15.878983     728 scope.go:117] "RemoveContainer" containerID="323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1"
	Nov 19 22:33:15 no-preload-178067 kubelet[728]: E1119 22:33:15.879162     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s7v5b_kubernetes-dashboard(fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b" podUID="fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4"
	Nov 19 22:33:19 no-preload-178067 kubelet[728]: I1119 22:33:19.665483     728 scope.go:117] "RemoveContainer" containerID="63b4d5c69223fdefa7ca853e7e38f705bdc5541b5c4cdcb98fb26b40f27b3d10"
	Nov 19 22:33:28 no-preload-178067 kubelet[728]: I1119 22:33:28.554712     728 scope.go:117] "RemoveContainer" containerID="323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1"
	Nov 19 22:33:29 no-preload-178067 kubelet[728]: I1119 22:33:29.691759     728 scope.go:117] "RemoveContainer" containerID="323ef44cbf36608103ce19138fb515834c62b48b3da5f74b785698e10785aac1"
	Nov 19 22:33:29 no-preload-178067 kubelet[728]: I1119 22:33:29.692034     728 scope.go:117] "RemoveContainer" containerID="1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed"
	Nov 19 22:33:29 no-preload-178067 kubelet[728]: E1119 22:33:29.692225     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s7v5b_kubernetes-dashboard(fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b" podUID="fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4"
	Nov 19 22:33:35 no-preload-178067 kubelet[728]: I1119 22:33:35.878795     728 scope.go:117] "RemoveContainer" containerID="1a13a9b1ed1ea1702db4123a2bd8ccbd0aa24f48d9195605f4565956783c52ed"
	Nov 19 22:33:35 no-preload-178067 kubelet[728]: E1119 22:33:35.878957     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-s7v5b_kubernetes-dashboard(fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-s7v5b" podUID="fdbf8a63-bb6c-43dc-a39f-04d8ee4b8ee4"
	Nov 19 22:33:42 no-preload-178067 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:33:42 no-preload-178067 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:33:42 no-preload-178067 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 22:33:42 no-preload-178067 systemd[1]: kubelet.service: Consumed 1.624s CPU time.
	
	
	==> kubernetes-dashboard [1b322960c77f50cdccffcfe8abe1d997e9c28f67a27b18ffb8d0b3ecb03a0409] <==
	2025/11/19 22:32:58 Using namespace: kubernetes-dashboard
	2025/11/19 22:32:58 Using in-cluster config to connect to apiserver
	2025/11/19 22:32:58 Using secret token for csrf signing
	2025/11/19 22:32:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 22:32:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 22:32:58 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 22:32:58 Generating JWE encryption key
	2025/11/19 22:32:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 22:32:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 22:32:58 Initializing JWE encryption key from synchronized object
	2025/11/19 22:32:58 Creating in-cluster Sidecar client
	2025/11/19 22:32:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:32:58 Serving insecurely on HTTP port: 9090
	2025/11/19 22:33:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:32:58 Starting overwatch
	
	
	==> storage-provisioner [63b4d5c69223fdefa7ca853e7e38f705bdc5541b5c4cdcb98fb26b40f27b3d10] <==
	I1119 22:32:48.921627       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 22:33:18.925436       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ca69bb794dfbdb19cc9d54b5b28bbc9d94279dffeb4a9e6a23344c42401ad048] <==
	I1119 22:33:19.728589       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:33:19.728645       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:33:19.730982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:23.186003       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:27.446605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:31.045289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:34.099273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:37.121473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:37.127742       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:33:37.127944       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:33:37.128142       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-178067_3bcf78ed-8446-4d6c-b59a-90fe7ff8724f!
	I1119 22:33:37.128647       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"410535e3-f1a2-4daf-93d0-dd88f3003fa0", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-178067_3bcf78ed-8446-4d6c-b59a-90fe7ff8724f became leader
	W1119 22:33:37.132163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:37.135156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:33:37.228721       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-178067_3bcf78ed-8446-4d6c-b59a-90fe7ff8724f!
	W1119 22:33:39.138111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:39.142452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:41.146135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:41.150407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:43.154367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:43.159689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:45.163726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:45.168997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:47.172456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:47.176598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
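The kubelet log above shows dashboard-metrics-scraper stuck in CrashLoopBackOff and the first storage-provisioner instance timing out against 10.96.0.1:443. A hedged follow-up sketch for that pod, for illustration only (the k8s-app label selector is an assumption based on the upstream dashboard manifests, not taken from this run):

	# Inspect the crash-looping scraper pod in the no-preload-178067 cluster (illustrative sketch):
	kubectl --context no-preload-178067 -n kubernetes-dashboard get pods
	kubectl --context no-preload-178067 -n kubernetes-dashboard describe pod -l k8s-app=dashboard-metrics-scraper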
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-178067 -n no-preload-178067
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-178067 -n no-preload-178067: exit status 2 (335.247345ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-178067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-443380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-443380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (251.111089ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:33:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
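The exit status 11 above comes from minikube's paused-state check, which runs the "sudo runc list -f json" command shown in the stderr and fails because /run/runc does not exist on the node. A minimal manual sketch of the same check, assuming the binary and profile name from this test (the /run/crio path in the second command is an assumption for illustration, not confirmed by this log):

	# Re-run roughly the check that produced the error above:
	out/minikube-linux-amd64 -p embed-certs-443380 ssh -- sudo runc list -f json
	# See which runtime state directories actually exist on the node:
	out/minikube-linux-amd64 -p embed-certs-443380 ssh -- ls /run/runc /run/crio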
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-443380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-443380 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-443380 describe deploy/metrics-server -n kube-system: exit status 1 (64.518731ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-443380 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
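The assertion at start_stop_delete_test.go:219 expects the metrics-server deployment's container image to have been rewritten to the fake registry. Since the deployment was never created here, there is nothing to inspect; for illustration, a hedged manual version of that check (the jsonpath expression is an assumption, not taken from the test code) might be:

	kubectl --context embed-certs-443380 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4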
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-443380
helpers_test.go:243: (dbg) docker inspect embed-certs-443380:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49",
	        "Created": "2025-11-19T22:33:06.74702883Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 253056,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:33:06.778247825Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49/hosts",
	        "LogPath": "/var/lib/docker/containers/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49-json.log",
	        "Name": "/embed-certs-443380",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-443380:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-443380",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49",
	                "LowerDir": "/var/lib/docker/overlay2/d3c225c57eb8600f9e2c8c3b511484ac3eca82f93258077472e63e86d3473e47-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3c225c57eb8600f9e2c8c3b511484ac3eca82f93258077472e63e86d3473e47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3c225c57eb8600f9e2c8c3b511484ac3eca82f93258077472e63e86d3473e47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3c225c57eb8600f9e2c8c3b511484ac3eca82f93258077472e63e86d3473e47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-443380",
	                "Source": "/var/lib/docker/volumes/embed-certs-443380/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-443380",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-443380",
	                "name.minikube.sigs.k8s.io": "embed-certs-443380",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "539ee6e1fb4a36d381b60d664c714db229e5ed78203d9b25b003a183eb4a7d01",
	            "SandboxKey": "/var/run/docker/netns/539ee6e1fb4a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-443380": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "79be9ba27c325ef564b730d7c6a14208f6797c8013b71ad28befe3377b076629",
	                    "EndpointID": "7f89a7fd070fd3924b091afd980ed702f1d27fd833cf1d4e5fb2eff8c73b47fe",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "e2:df:13:04:2b:89",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-443380",
	                        "f1d90b7b5af6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-443380 -n embed-certs-443380
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-443380 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p NoKubernetes-662839                                                                                                                                                                                                                        │ NoKubernetes-662839          │ jenkins │ v1.37.0 │ 19 Nov 25 22:30 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ delete  │ -p missing-upgrade-015670                                                                                                                                                                                                                     │ missing-upgrade-015670       │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:31 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:31 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-680619 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ stop    │ -p old-k8s-version-680619 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-680619 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-178067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ stop    │ -p no-preload-178067 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable dashboard -p no-preload-178067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p cert-expiration-855818 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-855818       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ delete  │ -p cert-expiration-855818                                                                                                                                                                                                                     │ cert-expiration-855818       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ image   │ old-k8s-version-680619 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p old-k8s-version-680619 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p disable-driver-mounts-726490                                                                                                                                                                                                               │ disable-driver-mounts-726490 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ image   │ no-preload-178067 image list --format=json                                                                                                                                                                                                    │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p no-preload-178067 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-443380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:33:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:33:23.883705  257842 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:33:23.883983  257842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:23.883993  257842 out.go:374] Setting ErrFile to fd 2...
	I1119 22:33:23.883997  257842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:33:23.884187  257842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:33:23.884673  257842 out.go:368] Setting JSON to false
	I1119 22:33:23.885756  257842 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4552,"bootTime":1763587052,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:33:23.885849  257842 start.go:143] virtualization: kvm guest
	I1119 22:33:23.887726  257842 out.go:179] * [default-k8s-diff-port-409987] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:33:23.889070  257842 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:33:23.889070  257842 notify.go:221] Checking for updates...
	I1119 22:33:23.891485  257842 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:33:23.892734  257842 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:33:23.893909  257842 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:33:23.895062  257842 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:33:23.896153  257842 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:33:23.897750  257842 config.go:182] Loaded profile config "embed-certs-443380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:23.897897  257842 config.go:182] Loaded profile config "kubernetes-upgrade-801704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:23.898024  257842 config.go:182] Loaded profile config "no-preload-178067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:23.898147  257842 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:33:23.925695  257842 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:33:23.925842  257842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:33:23.983931  257842 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:33:23.974160621 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:33:23.984034  257842 docker.go:319] overlay module found
	I1119 22:33:23.985686  257842 out.go:179] * Using the docker driver based on user configuration
	I1119 22:33:23.986806  257842 start.go:309] selected driver: docker
	I1119 22:33:23.986842  257842 start.go:930] validating driver "docker" against <nil>
	I1119 22:33:23.986855  257842 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:33:23.987349  257842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:33:24.044957  257842 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:33:24.035470502 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:33:24.045358  257842 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:33:24.045644  257842 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:33:24.047182  257842 out.go:179] * Using Docker driver with root privileges
	I1119 22:33:24.048300  257842 cni.go:84] Creating CNI manager for ""
	I1119 22:33:24.048398  257842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:33:24.048413  257842 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 22:33:24.048479  257842 start.go:353] cluster config:
	{Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:33:24.049668  257842 out.go:179] * Starting "default-k8s-diff-port-409987" primary control-plane node in "default-k8s-diff-port-409987" cluster
	I1119 22:33:24.050617  257842 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:33:24.051685  257842 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:33:24.052672  257842 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:24.052710  257842 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:33:24.052717  257842 cache.go:65] Caching tarball of preloaded images
	I1119 22:33:24.052766  257842 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:33:24.052856  257842 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:33:24.052873  257842 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:33:24.052980  257842 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json ...
	I1119 22:33:24.053013  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json: {Name:mkd16b9878826f2245b2c07a772bd12235442172 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:24.072676  257842 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:33:24.072691  257842 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:33:24.072705  257842 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:33:24.072727  257842 start.go:360] acquireMachinesLock for default-k8s-diff-port-409987: {Name:mk3691865877e78ad0fe52d2c0e71ee1c1c3699a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:33:24.072831  257842 start.go:364] duration metric: took 71.579µs to acquireMachinesLock for "default-k8s-diff-port-409987"
	I1119 22:33:24.072860  257842 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:33:24.072935  257842 start.go:125] createHost starting for "" (driver="docker")
	I1119 22:33:21.846845  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:22.347017  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:22.847034  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:23.346898  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:23.846436  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:24.346943  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:24.846671  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:25.346975  252325 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:25.420030  252325 kubeadm.go:1114] duration metric: took 4.651844422s to wait for elevateKubeSystemPrivileges
	I1119 22:33:25.420066  252325 kubeadm.go:403] duration metric: took 14.384664171s to StartCluster
	I1119 22:33:25.420088  252325 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:25.420154  252325 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:33:25.422122  252325 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:25.422376  252325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:33:25.422394  252325 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:33:25.422458  252325 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:33:25.422555  252325 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-443380"
	I1119 22:33:25.422587  252325 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-443380"
	I1119 22:33:25.422585  252325 addons.go:70] Setting default-storageclass=true in profile "embed-certs-443380"
	I1119 22:33:25.422605  252325 config.go:182] Loaded profile config "embed-certs-443380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:25.422616  252325 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-443380"
	I1119 22:33:25.422620  252325 host.go:66] Checking if "embed-certs-443380" exists ...
	I1119 22:33:25.423009  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:25.423154  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:25.425572  252325 out.go:179] * Verifying Kubernetes components...
	I1119 22:33:25.427178  252325 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:25.446337  252325 addons.go:239] Setting addon default-storageclass=true in "embed-certs-443380"
	I1119 22:33:25.446384  252325 host.go:66] Checking if "embed-certs-443380" exists ...
	I1119 22:33:25.446890  252325 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:33:25.448940  252325 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:33:25.450228  252325 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:33:25.450251  252325 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:33:25.450306  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:25.480574  252325 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:33:25.480600  252325 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:33:25.480661  252325 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:33:25.481387  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:25.506078  252325 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:33:25.523359  252325 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:33:25.586976  252325 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:33:25.611710  252325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:33:25.635667  252325 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:33:25.747803  252325 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1119 22:33:25.750001  252325 node_ready.go:35] waiting up to 6m0s for node "embed-certs-443380" to be "Ready" ...
	I1119 22:33:25.969838  252325 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:33:25.970910  252325 addons.go:515] duration metric: took 548.451841ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:33:26.253634  252325 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-443380" context rescaled to 1 replicas
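For reference, the coredns ConfigMap rewrite a few lines above splices a hosts block (192.168.85.1 host.minikube.internal) into the Corefile ahead of the forward directive. A minimal way to confirm the injected record from a machine with the embed-certs-443380 context active — the jsonpath/grep invocation below is an assumption for illustration, not part of the captured run:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
    # expected output, per the host record injected at 22:33:25:
    #    hosts {
    #       192.168.85.1 host.minikube.internal
    #       fallthrough
    #    }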
	I1119 22:33:22.382769  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:22.383154  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:22.383202  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:22.383251  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:22.412635  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:22.412654  229026 cri.go:89] found id: ""
	I1119 22:33:22.412662  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:22.412702  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:22.416473  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:22.416531  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:22.442074  229026 cri.go:89] found id: ""
	I1119 22:33:22.442093  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.442100  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:22.442105  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:22.442152  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:22.467611  229026 cri.go:89] found id: ""
	I1119 22:33:22.467633  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.467641  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:22.467648  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:22.467703  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:22.494154  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:22.494172  229026 cri.go:89] found id: ""
	I1119 22:33:22.494180  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:22.494229  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:22.497892  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:22.497950  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:22.523686  229026 cri.go:89] found id: ""
	I1119 22:33:22.523711  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.523720  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:22.523729  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:22.523785  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:22.549770  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:22.549794  229026 cri.go:89] found id: "c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:22.549799  229026 cri.go:89] found id: ""
	I1119 22:33:22.549810  229026 logs.go:282] 2 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a]
	I1119 22:33:22.549889  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:22.554433  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:22.558149  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:22.558194  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:22.594272  229026 cri.go:89] found id: ""
	I1119 22:33:22.594299  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.594309  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:22.594317  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:22.594359  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:22.625976  229026 cri.go:89] found id: ""
	I1119 22:33:22.626001  229026 logs.go:282] 0 containers: []
	W1119 22:33:22.626012  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:22.626027  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:22.626038  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:22.660094  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:22.660123  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:22.676931  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:22.676957  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:22.733420  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:22.733439  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:22.733450  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:22.765920  229026 logs.go:123] Gathering logs for kube-controller-manager [c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a] ...
	I1119 22:33:22.765952  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c2899a28680b9c8854752507e571ccefad28a24f07680530744aed998a92278a"
	I1119 22:33:22.791770  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:22.791795  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:22.832968  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:22.832994  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:22.920507  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:22.920540  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:22.985203  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:22.985241  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:25.512901  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:25.514058  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:25.514118  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:25.514214  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:25.556844  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:25.556876  229026 cri.go:89] found id: ""
	I1119 22:33:25.556887  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:25.556952  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:25.562892  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:25.562953  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:25.605067  229026 cri.go:89] found id: ""
	I1119 22:33:25.605124  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.605136  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:25.605145  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:25.605204  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:25.644356  229026 cri.go:89] found id: ""
	I1119 22:33:25.644385  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.644395  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:25.644403  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:25.644460  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:25.683152  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:25.683178  229026 cri.go:89] found id: ""
	I1119 22:33:25.683273  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:25.683342  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:25.688089  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:25.688208  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:25.725026  229026 cri.go:89] found id: ""
	I1119 22:33:25.725056  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.725065  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:25.725073  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:25.725244  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:25.761160  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:25.761204  229026 cri.go:89] found id: ""
	I1119 22:33:25.761216  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:25.761282  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:25.766966  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:25.767028  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:25.804510  229026 cri.go:89] found id: ""
	I1119 22:33:25.804540  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.804551  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:25.804559  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:25.804622  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:25.837652  229026 cri.go:89] found id: ""
	I1119 22:33:25.837679  229026 logs.go:282] 0 containers: []
	W1119 22:33:25.837701  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:25.837712  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:25.837726  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:25.892405  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:25.892441  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:25.927183  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:25.927223  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:25.982585  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:25.982613  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:26.013887  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:26.013923  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:26.098577  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:26.098611  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:26.115217  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:26.115244  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:26.178958  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:26.178984  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:26.179005  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	W1119 22:33:23.608027  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:25.612411  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	W1119 22:33:28.107283  247081 pod_ready.go:104] pod "coredns-66bc5c9577-9dwxr" is not "Ready", error: <nil>
	I1119 22:33:24.074503  257842 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1119 22:33:24.074696  257842 start.go:159] libmachine.API.Create for "default-k8s-diff-port-409987" (driver="docker")
	I1119 22:33:24.074724  257842 client.go:173] LocalClient.Create starting
	I1119 22:33:24.074791  257842 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem
	I1119 22:33:24.074871  257842 main.go:143] libmachine: Decoding PEM data...
	I1119 22:33:24.074891  257842 main.go:143] libmachine: Parsing certificate...
	I1119 22:33:24.074944  257842 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem
	I1119 22:33:24.074966  257842 main.go:143] libmachine: Decoding PEM data...
	I1119 22:33:24.074977  257842 main.go:143] libmachine: Parsing certificate...
	I1119 22:33:24.075254  257842 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1119 22:33:24.091285  257842 cli_runner.go:211] docker network inspect default-k8s-diff-port-409987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1119 22:33:24.091350  257842 network_create.go:284] running [docker network inspect default-k8s-diff-port-409987] to gather additional debugging logs...
	I1119 22:33:24.091365  257842 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409987
	W1119 22:33:24.108545  257842 cli_runner.go:211] docker network inspect default-k8s-diff-port-409987 returned with exit code 1
	I1119 22:33:24.108572  257842 network_create.go:287] error running [docker network inspect default-k8s-diff-port-409987]: docker network inspect default-k8s-diff-port-409987: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-409987 not found
	I1119 22:33:24.108587  257842 network_create.go:289] output of [docker network inspect default-k8s-diff-port-409987]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-409987 not found
	
	** /stderr **
	I1119 22:33:24.108708  257842 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:33:24.125616  257842 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cde0f356bd10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b5:fa:ba:e0:a6} reservation:<nil>}
	I1119 22:33:24.126341  257842 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-47fb5ce24a02 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:86:30:91:0e:d6:d9} reservation:<nil>}
	I1119 22:33:24.127005  257842 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-2592199ffac9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:9b:dd:65:07:28} reservation:<nil>}
	I1119 22:33:24.127748  257842 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f40680}
	I1119 22:33:24.127768  257842 network_create.go:124] attempt to create docker network default-k8s-diff-port-409987 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1119 22:33:24.127824  257842 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 default-k8s-diff-port-409987
	I1119 22:33:24.174801  257842 network_create.go:108] docker network default-k8s-diff-port-409987 192.168.76.0/24 created
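The subnet scan above walks the existing bridge networks (192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 were already taken) and settles on the first free /24. A rough manual equivalent, reusing the same inspect template the harness runs — the network-ls pipeline and the trimmed-down create flags are assumptions, shown only as a sketch:

    # list subnets already claimed by local bridge networks
    docker network ls --filter driver=bridge -q \
      | xargs docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
    # then create the profile network on the first unused /24, as the log does for 192.168.76.0/24
    docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
      -o com.docker.network.driver.mtu=1500 \
      --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 default-k8s-diff-port-409987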
	I1119 22:33:24.174930  257842 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-409987" container
	I1119 22:33:24.174986  257842 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1119 22:33:24.193121  257842 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-409987 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 --label created_by.minikube.sigs.k8s.io=true
	I1119 22:33:24.209597  257842 oci.go:103] Successfully created a docker volume default-k8s-diff-port-409987
	I1119 22:33:24.209672  257842 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-409987-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 --entrypoint /usr/bin/test -v default-k8s-diff-port-409987:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -d /var/lib
	I1119 22:33:24.605177  257842 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-409987
	I1119 22:33:24.605252  257842 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:24.605267  257842 kic.go:194] Starting extracting preloaded images to volume ...
	I1119 22:33:24.605340  257842 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-409987:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir
	I1119 22:33:29.133052  247081 pod_ready.go:94] pod "coredns-66bc5c9577-9dwxr" is "Ready"
	I1119 22:33:29.133080  247081 pod_ready.go:86] duration metric: took 39.530851945s for pod "coredns-66bc5c9577-9dwxr" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.138098  247081 pod_ready.go:83] waiting for pod "etcd-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.142937  247081 pod_ready.go:94] pod "etcd-no-preload-178067" is "Ready"
	I1119 22:33:29.142962  247081 pod_ready.go:86] duration metric: took 4.839499ms for pod "etcd-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.238949  247081 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.244009  247081 pod_ready.go:94] pod "kube-apiserver-no-preload-178067" is "Ready"
	I1119 22:33:29.244037  247081 pod_ready.go:86] duration metric: took 5.06142ms for pod "kube-apiserver-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.246567  247081 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.305183  247081 pod_ready.go:94] pod "kube-controller-manager-no-preload-178067" is "Ready"
	I1119 22:33:29.305208  247081 pod_ready.go:86] duration metric: took 58.619262ms for pod "kube-controller-manager-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.504991  247081 pod_ready.go:83] waiting for pod "kube-proxy-xll6z" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:29.905540  247081 pod_ready.go:94] pod "kube-proxy-xll6z" is "Ready"
	I1119 22:33:29.905566  247081 pod_ready.go:86] duration metric: took 400.551202ms for pod "kube-proxy-xll6z" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:30.105246  247081 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:30.505433  247081 pod_ready.go:94] pod "kube-scheduler-no-preload-178067" is "Ready"
	I1119 22:33:30.505459  247081 pod_ready.go:86] duration metric: took 400.188275ms for pod "kube-scheduler-no-preload-178067" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:30.505470  247081 pod_ready.go:40] duration metric: took 40.906421291s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:33:30.547626  247081 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:33:30.549623  247081 out.go:179] * Done! kubectl is now configured to use "no-preload-178067" cluster and "default" namespace by default
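The 40.9s wait that finishes just above polls the kube-system pods carrying the listed control-plane and DNS labels until each reports Ready. Outside the harness, roughly the same check can be expressed with kubectl wait — label selectors are taken from the log, the context name matches the profile, and the 6m timeout is assumed:

    for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context no-preload-178067 -n kube-system \
        wait --for=condition=Ready pod -l "$l" --timeout=6m
    done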
	W1119 22:33:27.844382  252325 node_ready.go:57] node "embed-certs-443380" has "Ready":"False" status (will retry)
	W1119 22:33:30.253097  252325 node_ready.go:57] node "embed-certs-443380" has "Ready":"False" status (will retry)
	I1119 22:33:28.710624  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:28.711065  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:28.711113  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:28.711160  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:28.736722  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:28.736744  229026 cri.go:89] found id: ""
	I1119 22:33:28.736752  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:28.736803  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:28.741111  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:28.741177  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:28.766295  229026 cri.go:89] found id: ""
	I1119 22:33:28.766319  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.766327  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:28.766333  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:28.766378  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:28.791972  229026 cri.go:89] found id: ""
	I1119 22:33:28.791994  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.792001  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:28.792006  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:28.792056  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:28.818307  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:28.818327  229026 cri.go:89] found id: ""
	I1119 22:33:28.818335  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:28.818394  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:28.822683  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:28.822764  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:28.856448  229026 cri.go:89] found id: ""
	I1119 22:33:28.856499  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.856510  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:28.856518  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:28.856580  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:28.882557  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:28.882584  229026 cri.go:89] found id: ""
	I1119 22:33:28.882592  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:28.882645  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:28.886479  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:28.886545  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:28.912563  229026 cri.go:89] found id: ""
	I1119 22:33:28.912588  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.912595  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:28.912601  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:28.912644  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:28.937277  229026 cri.go:89] found id: ""
	I1119 22:33:28.937299  229026 logs.go:282] 0 containers: []
	W1119 22:33:28.937306  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:28.937315  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:28.937326  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:28.966343  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:28.966368  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:29.014708  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:29.014743  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:29.040387  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:29.040411  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:29.082359  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:29.082390  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:29.111167  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:29.111194  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:29.215828  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:29.215865  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:29.230491  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:29.230519  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:29.295659  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
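At this point the apiserver at 192.168.94.2:8443 is still refusing connections, which is why both the healthz probe and the describe-nodes call above fail. A manual spot-check of the same two signals from inside the node — the curl flags are assumptions; the crictl path is the one the log invokes:

    # probe the same endpoint the harness polls (-k because the host does not trust the cluster CA)
    curl -k --max-time 2 https://192.168.94.2:8443/healthz; echo
    # list apiserver containers, running or exited, as the fallback log gathering does
    sudo /usr/local/bin/crictl ps -a --name kube-apiserver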
	I1119 22:33:29.158194  257842 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-409987:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 -I lz4 -xf /preloaded.tar -C /extractDir: (4.55281447s)
	I1119 22:33:29.158220  257842 kic.go:203] duration metric: took 4.552950236s to extract preloaded images to volume ...
	W1119 22:33:29.158286  257842 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1119 22:33:29.158312  257842 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1119 22:33:29.158344  257842 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1119 22:33:29.217611  257842 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-409987 --name default-k8s-diff-port-409987 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-409987 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-409987 --network default-k8s-diff-port-409987 --ip 192.168.76.2 --volume default-k8s-diff-port-409987:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865
	I1119 22:33:29.532541  257842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Running}}
	I1119 22:33:29.551244  257842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:33:29.569223  257842 cli_runner.go:164] Run: docker exec default-k8s-diff-port-409987 stat /var/lib/dpkg/alternatives/iptables
	I1119 22:33:29.614972  257842 oci.go:144] the created container "default-k8s-diff-port-409987" has a running status.
	I1119 22:33:29.614999  257842 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa...
	I1119 22:33:29.811803  257842 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1119 22:33:29.835714  257842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:33:29.852802  257842 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1119 22:33:29.852845  257842 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-409987 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1119 22:33:29.895797  257842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:33:29.913061  257842 machine.go:94] provisionDockerMachine start ...
	I1119 22:33:29.913137  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:29.929995  257842 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:29.930308  257842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1119 22:33:29.930328  257842 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:33:29.931145  257842 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54386->127.0.0.1:33078: read: connection reset by peer
	I1119 22:33:33.055705  257842 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409987
	
	I1119 22:33:33.055755  257842 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-409987"
	I1119 22:33:33.055830  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.073640  257842 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:33.073912  257842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1119 22:33:33.073935  257842 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-409987 && echo "default-k8s-diff-port-409987" | sudo tee /etc/hostname
	I1119 22:33:33.206352  257842 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409987
	
	I1119 22:33:33.206423  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.224632  257842 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:33.224930  257842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1119 22:33:33.224968  257842 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-409987' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-409987/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-409987' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:33:33.347746  257842 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:33:33.347776  257842 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 22:33:33.347844  257842 ubuntu.go:190] setting up certificates
	I1119 22:33:33.347867  257842 provision.go:84] configureAuth start
	I1119 22:33:33.347925  257842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:33:33.365007  257842 provision.go:143] copyHostCerts
	I1119 22:33:33.365064  257842 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem, removing ...
	I1119 22:33:33.365077  257842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem
	I1119 22:33:33.365153  257842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 22:33:33.365253  257842 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem, removing ...
	I1119 22:33:33.365265  257842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem
	I1119 22:33:33.365299  257842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 22:33:33.365384  257842 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem, removing ...
	I1119 22:33:33.365393  257842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem
	I1119 22:33:33.365439  257842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 22:33:33.365514  257842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-409987 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-409987 localhost minikube]
	I1119 22:33:33.469295  257842 provision.go:177] copyRemoteCerts
	I1119 22:33:33.469350  257842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:33:33.469399  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.487180  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:33.579229  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:33:33.598170  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:33:33.615332  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:33:33.631707  257842 provision.go:87] duration metric: took 283.825271ms to configureAuth
	I1119 22:33:33.631738  257842 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:33:33.631927  257842 config.go:182] Loaded profile config "default-k8s-diff-port-409987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:33:33.632038  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.649525  257842 main.go:143] libmachine: Using SSH client type: native
	I1119 22:33:33.649754  257842 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1119 22:33:33.649776  257842 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:33:33.911864  257842 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:33:33.911899  257842 machine.go:97] duration metric: took 3.998818366s to provisionDockerMachine
	I1119 22:33:33.911921  257842 client.go:176] duration metric: took 9.837189219s to LocalClient.Create
	I1119 22:33:33.911944  257842 start.go:167] duration metric: took 9.837246112s to libmachine.API.Create "default-k8s-diff-port-409987"
	I1119 22:33:33.911958  257842 start.go:293] postStartSetup for "default-k8s-diff-port-409987" (driver="docker")
	I1119 22:33:33.911972  257842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:33:33.912049  257842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:33:33.912100  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:33.930567  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:34.023978  257842 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:33:34.027239  257842 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:33:34.027262  257842 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:33:34.027271  257842 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 22:33:34.027334  257842 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 22:33:34.027439  257842 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem -> 128292.pem in /etc/ssl/certs
	I1119 22:33:34.027574  257842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:33:34.034703  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:33:34.053030  257842 start.go:296] duration metric: took 141.059047ms for postStartSetup
	I1119 22:33:34.053328  257842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:33:34.071401  257842 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json ...
	I1119 22:33:34.071655  257842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:33:34.071702  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:34.089393  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:34.179354  257842 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:33:34.184028  257842 start.go:128] duration metric: took 10.111081087s to createHost
	I1119 22:33:34.184050  257842 start.go:83] releasing machines lock for "default-k8s-diff-port-409987", held for 10.111205257s
	I1119 22:33:34.184110  257842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:33:34.201570  257842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:33:34.201588  257842 ssh_runner.go:195] Run: cat /version.json
	I1119 22:33:34.201638  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:34.201643  257842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:33:34.219778  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:34.220185  257842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:33:34.307356  257842 ssh_runner.go:195] Run: systemctl --version
	I1119 22:33:34.377088  257842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:33:34.409301  257842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:33:34.413564  257842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:33:34.413625  257842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:33:34.439025  257842 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1119 22:33:34.439049  257842 start.go:496] detecting cgroup driver to use...
	I1119 22:33:34.439080  257842 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:33:34.439115  257842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:33:34.453624  257842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:33:34.464939  257842 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:33:34.464985  257842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:33:34.480085  257842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:33:34.496141  257842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:33:34.577139  257842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:33:34.661485  257842 docker.go:234] disabling docker service ...
	I1119 22:33:34.661548  257842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:33:34.680544  257842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:33:34.693829  257842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:33:34.778614  257842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:33:34.863617  257842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:33:34.876075  257842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:33:34.890553  257842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:33:34.890610  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.901356  257842 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 22:33:34.901423  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.910601  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.920150  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.929306  257842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:33:34.937318  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.946730  257842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.960309  257842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:33:34.968769  257842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:33:34.977040  257842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:33:34.984350  257842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:35.075418  257842 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:33:35.218176  257842 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:33:35.218239  257842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:33:35.221998  257842 start.go:564] Will wait 60s for crictl version
	I1119 22:33:35.222046  257842 ssh_runner.go:195] Run: which crictl
	I1119 22:33:35.225560  257842 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:33:35.248793  257842 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:33:35.248876  257842 ssh_runner.go:195] Run: crio --version
	I1119 22:33:35.277023  257842 ssh_runner.go:195] Run: crio --version
	I1119 22:33:35.307857  257842 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
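Note: the CRI-O setup the 257842 process performed above reduces to a short shell sequence. The sketch below restates the commands from the log for readability, assuming the same /etc/crio/crio.conf.d/02-crio.conf drop-in and pause image tag shown there:

	# point crictl at the CRI-O socket (as done above via tee /etc/crictl.yaml)
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch CRI-O to the systemd cgroup manager
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	# reload units and restart the runtime, as the log does before waiting on crio.sock
	sudo systemctl daemon-reload && sudo systemctl restart crio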
	W1119 22:33:32.253373  252325 node_ready.go:57] node "embed-certs-443380" has "Ready":"False" status (will retry)
	W1119 22:33:34.754649  252325 node_ready.go:57] node "embed-certs-443380" has "Ready":"False" status (will retry)
	I1119 22:33:31.796780  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:31.797236  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:31.797296  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:31.797357  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:31.822313  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:31.822330  229026 cri.go:89] found id: ""
	I1119 22:33:31.822337  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:31.822381  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:31.825911  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:31.825967  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:31.851805  229026 cri.go:89] found id: ""
	I1119 22:33:31.851852  229026 logs.go:282] 0 containers: []
	W1119 22:33:31.851859  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:31.851864  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:31.851918  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:31.877079  229026 cri.go:89] found id: ""
	I1119 22:33:31.877100  229026 logs.go:282] 0 containers: []
	W1119 22:33:31.877107  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:31.877113  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:31.877160  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:31.901847  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:31.901864  229026 cri.go:89] found id: ""
	I1119 22:33:31.901871  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:31.901909  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:31.906013  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:31.906067  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:31.930107  229026 cri.go:89] found id: ""
	I1119 22:33:31.930128  229026 logs.go:282] 0 containers: []
	W1119 22:33:31.930137  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:31.930144  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:31.930183  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:31.954253  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:31.954272  229026 cri.go:89] found id: ""
	I1119 22:33:31.954291  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:31.954347  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:31.957894  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:31.957950  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:31.982148  229026 cri.go:89] found id: ""
	I1119 22:33:31.982171  229026 logs.go:282] 0 containers: []
	W1119 22:33:31.982181  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:31.982187  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:31.982232  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:32.010776  229026 cri.go:89] found id: ""
	I1119 22:33:32.010801  229026 logs.go:282] 0 containers: []
	W1119 22:33:32.010809  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:32.010835  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:32.010850  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:32.036144  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:32.036167  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:32.078660  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:32.078684  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:32.106831  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:32.106857  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:32.189849  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:32.189874  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:32.203302  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:32.203326  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:32.257080  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:32.257098  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:32.257112  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:32.289358  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:32.289436  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:34.836503  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:34.836865  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:34.836919  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:34.836974  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:34.864697  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:34.864716  229026 cri.go:89] found id: ""
	I1119 22:33:34.864726  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:34.864788  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:34.868370  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:34.868423  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:34.894465  229026 cri.go:89] found id: ""
	I1119 22:33:34.894487  229026 logs.go:282] 0 containers: []
	W1119 22:33:34.894498  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:34.894505  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:34.894555  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:34.922777  229026 cri.go:89] found id: ""
	I1119 22:33:34.922798  229026 logs.go:282] 0 containers: []
	W1119 22:33:34.922810  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:34.922835  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:34.922886  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:34.949441  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:34.949462  229026 cri.go:89] found id: ""
	I1119 22:33:34.949471  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:34.949515  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:34.952986  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:34.953034  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:34.978855  229026 cri.go:89] found id: ""
	I1119 22:33:34.978885  229026 logs.go:282] 0 containers: []
	W1119 22:33:34.978896  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:34.978905  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:34.978956  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:35.004626  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:35.004650  229026 cri.go:89] found id: ""
	I1119 22:33:35.004658  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:35.004709  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:35.008905  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:35.008961  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:35.039110  229026 cri.go:89] found id: ""
	I1119 22:33:35.039132  229026 logs.go:282] 0 containers: []
	W1119 22:33:35.039141  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:35.039149  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:35.039202  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:35.065661  229026 cri.go:89] found id: ""
	I1119 22:33:35.065694  229026 logs.go:282] 0 containers: []
	W1119 22:33:35.065705  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:35.065719  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:35.065741  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:35.095020  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:35.095050  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:35.143773  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:35.143802  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:35.174044  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:35.174078  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:35.265375  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:35.265400  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:35.280716  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:35.280744  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:35.339887  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:35.339905  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:35.339919  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:35.375008  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:35.375028  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
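Note: the 229026 process above is in a diagnostics loop: the apiserver health endpoint refuses connections, so it repeatedly enumerates CRI containers and gathers logs. Run by hand on the node, the same collection comes down to a few commands (a sketch; the container id is a placeholder for one of the ids found by crictl ps):

	# minimal sketch: pull the same diagnostics manually while the apiserver is down
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a
	sudo crictl logs --tail 400 <container-id>   # e.g. the kube-apiserver or kube-scheduler id listed above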
	I1119 22:33:35.308950  257842 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:33:35.327275  257842 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 22:33:35.331352  257842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:33:35.342840  257842 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:33:35.343008  257842 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:33:35.343065  257842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:33:35.374136  257842 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:33:35.374157  257842 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:33:35.374203  257842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:33:35.399179  257842 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:33:35.399198  257842 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:33:35.399205  257842 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1119 22:33:35.399280  257842 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-409987 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:33:35.399339  257842 ssh_runner.go:195] Run: crio config
	I1119 22:33:35.444494  257842 cni.go:84] Creating CNI manager for ""
	I1119 22:33:35.444513  257842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:33:35.444528  257842 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:33:35.444547  257842 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-409987 NodeName:default-k8s-diff-port-409987 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:33:35.444673  257842 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-409987"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:33:35.444731  257842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:33:35.452420  257842 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:33:35.452477  257842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:33:35.459942  257842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 22:33:35.471786  257842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:33:35.486354  257842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
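Note: the kubeadm config dumped above is what gets written to /var/tmp/minikube/kubeadm.yaml.new here (2224 bytes) and later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs against it (see the Start: line further down). A quick way to sanity-check such a file on the node without changing any state is a dry run; a sketch, assuming standard kubeadm flags rather than anything specific to this harness:

	# minimal sketch: validate the generated kubeadm config without applying it
	sudo /bin/bash -c "env PATH=/var/lib/minikube/binaries/v1.34.1:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run"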
	I1119 22:33:35.497770  257842 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:33:35.501361  257842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:33:35.510565  257842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:33:35.589911  257842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:33:35.612829  257842 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987 for IP: 192.168.76.2
	I1119 22:33:35.612849  257842 certs.go:195] generating shared ca certs ...
	I1119 22:33:35.612868  257842 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:35.613005  257842 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 22:33:35.613069  257842 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 22:33:35.613084  257842 certs.go:257] generating profile certs ...
	I1119 22:33:35.613150  257842 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.key
	I1119 22:33:35.613176  257842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.crt with IP's: []
	I1119 22:33:36.259839  257842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.crt ...
	I1119 22:33:36.259864  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.crt: {Name:mk51645faa5989875e782e359a15271baba6c64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:36.260055  257842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.key ...
	I1119 22:33:36.260072  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.key: {Name:mkcbdf4025b10d73f6acb70bea0cad4aaaa9a2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:36.260192  257842 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832
	I1119 22:33:36.260218  257842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt.e1aaa832 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1119 22:33:36.935157  257842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt.e1aaa832 ...
	I1119 22:33:36.935185  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt.e1aaa832: {Name:mka229d41a2be07fe6a31ff8c42ef5ff6a82a36c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:36.935348  257842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832 ...
	I1119 22:33:36.935366  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832: {Name:mk46e2ff9da97b96045d25f2b413ce78625d779e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:36.935473  257842 certs.go:382] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt.e1aaa832 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt
	I1119 22:33:36.935578  257842 certs.go:386] copying /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832 -> /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key
	I1119 22:33:36.935666  257842 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key
	I1119 22:33:36.935689  257842 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt with IP's: []
	I1119 22:33:37.249125  257842 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt ...
	I1119 22:33:37.249156  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt: {Name:mkea403caf60bc3ff91af8eead4c159ce9fb0ac6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:37.249328  257842 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key ...
	I1119 22:33:37.249343  257842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key: {Name:mk241b53e3e9b76398e3ef0e5e4da30803b4e527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:33:37.249518  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem (1338 bytes)
	W1119 22:33:37.249551  257842 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829_empty.pem, impossibly tiny 0 bytes
	I1119 22:33:37.249561  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:33:37.249581  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:33:37.249602  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:33:37.249623  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 22:33:37.249663  257842 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:33:37.250283  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:33:37.269810  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:33:37.288148  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:33:37.304313  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:33:37.320191  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:33:37.336074  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:33:37.352096  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:33:37.367943  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:33:37.383751  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:33:37.401518  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem --> /usr/share/ca-certificates/12829.pem (1338 bytes)
	I1119 22:33:37.417219  257842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /usr/share/ca-certificates/128292.pem (1708 bytes)
	I1119 22:33:37.433104  257842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:33:37.444411  257842 ssh_runner.go:195] Run: openssl version
	I1119 22:33:37.449967  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:33:37.457635  257842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:37.460958  257842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:37.461009  257842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:33:37.495263  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:33:37.503611  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12829.pem && ln -fs /usr/share/ca-certificates/12829.pem /etc/ssl/certs/12829.pem"
	I1119 22:33:37.511484  257842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12829.pem
	I1119 22:33:37.515181  257842 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12829.pem
	I1119 22:33:37.515229  257842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12829.pem
	I1119 22:33:37.549064  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12829.pem /etc/ssl/certs/51391683.0"
	I1119 22:33:37.557207  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128292.pem && ln -fs /usr/share/ca-certificates/128292.pem /etc/ssl/certs/128292.pem"
	I1119 22:33:37.565171  257842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128292.pem
	I1119 22:33:37.568456  257842 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128292.pem
	I1119 22:33:37.568497  257842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128292.pem
	I1119 22:33:37.602554  257842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128292.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:33:37.610380  257842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:33:37.613736  257842 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1119 22:33:37.613789  257842 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:33:37.613885  257842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:33:37.613954  257842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:33:37.640740  257842 cri.go:89] found id: ""
	I1119 22:33:37.640811  257842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:33:37.648414  257842 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1119 22:33:37.655868  257842 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1119 22:33:37.655906  257842 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1119 22:33:37.662990  257842 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1119 22:33:37.663005  257842 kubeadm.go:158] found existing configuration files:
	
	I1119 22:33:37.663036  257842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1119 22:33:37.670124  257842 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1119 22:33:37.670173  257842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1119 22:33:37.677380  257842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1119 22:33:37.686619  257842 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1119 22:33:37.686669  257842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1119 22:33:37.693439  257842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1119 22:33:37.700485  257842 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1119 22:33:37.700520  257842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1119 22:33:37.707378  257842 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1119 22:33:37.714951  257842 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1119 22:33:37.714984  257842 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1119 22:33:37.721780  257842 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1119 22:33:37.759244  257842 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:33:37.759294  257842 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:33:37.786995  257842 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:33:37.787082  257842 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:33:37.787129  257842 kubeadm.go:319] OS: Linux
	I1119 22:33:37.787187  257842 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:33:37.787260  257842 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:33:37.787357  257842 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:33:37.787443  257842 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:33:37.787529  257842 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:33:37.787609  257842 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:33:37.787686  257842 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:33:37.787779  257842 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:33:37.851453  257842 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:33:37.851600  257842 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:33:37.851724  257842 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:33:37.860973  257842 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:33:37.862911  257842 out.go:252]   - Generating certificates and keys ...
	I1119 22:33:37.863031  257842 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:33:37.863132  257842 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:33:37.987676  257842 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:33:38.117107  257842 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:33:38.304291  257842 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:33:38.419481  257842 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:33:38.673629  257842 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:33:38.673787  257842 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-409987 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:33:38.716286  257842 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:33:38.716448  257842 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-409987 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1119 22:33:38.841539  257842 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:33:37.253123  252325 node_ready.go:49] node "embed-certs-443380" is "Ready"
	I1119 22:33:37.253146  252325 node_ready.go:38] duration metric: took 11.503113839s for node "embed-certs-443380" to be "Ready" ...
	I1119 22:33:37.253158  252325 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:33:37.253193  252325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:33:37.264416  252325 api_server.go:72] duration metric: took 11.841983624s to wait for apiserver process to appear ...
	I1119 22:33:37.264435  252325 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:33:37.264448  252325 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:33:37.269949  252325 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1119 22:33:37.270720  252325 api_server.go:141] control plane version: v1.34.1
	I1119 22:33:37.270741  252325 api_server.go:131] duration metric: took 6.29992ms to wait for apiserver health ...
	I1119 22:33:37.270748  252325 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:33:37.273648  252325 system_pods.go:59] 8 kube-system pods found
	I1119 22:33:37.273681  252325 system_pods.go:61] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:33:37.273687  252325 system_pods.go:61] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:37.273692  252325 system_pods.go:61] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:37.273695  252325 system_pods.go:61] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:37.273699  252325 system_pods.go:61] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:37.273702  252325 system_pods.go:61] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:37.273705  252325 system_pods.go:61] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:37.273710  252325 system_pods.go:61] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:33:37.273719  252325 system_pods.go:74] duration metric: took 2.966347ms to wait for pod list to return data ...
	I1119 22:33:37.273726  252325 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:33:37.275697  252325 default_sa.go:45] found service account: "default"
	I1119 22:33:37.275714  252325 default_sa.go:55] duration metric: took 1.983922ms for default service account to be created ...
	I1119 22:33:37.275722  252325 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:33:37.278323  252325 system_pods.go:86] 8 kube-system pods found
	I1119 22:33:37.278347  252325 system_pods.go:89] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:33:37.278357  252325 system_pods.go:89] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:37.278362  252325 system_pods.go:89] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:37.278366  252325 system_pods.go:89] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:37.278370  252325 system_pods.go:89] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:37.278373  252325 system_pods.go:89] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:37.278376  252325 system_pods.go:89] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:37.278380  252325 system_pods.go:89] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:33:37.278397  252325 retry.go:31] will retry after 216.008228ms: missing components: kube-dns
	I1119 22:33:37.498308  252325 system_pods.go:86] 8 kube-system pods found
	I1119 22:33:37.498341  252325 system_pods.go:89] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:33:37.498349  252325 system_pods.go:89] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:37.498359  252325 system_pods.go:89] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:37.498366  252325 system_pods.go:89] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:37.498373  252325 system_pods.go:89] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:37.498379  252325 system_pods.go:89] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:37.498384  252325 system_pods.go:89] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:37.498396  252325 system_pods.go:89] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:33:37.498412  252325 retry.go:31] will retry after 271.433631ms: missing components: kube-dns
	I1119 22:33:37.773981  252325 system_pods.go:86] 8 kube-system pods found
	I1119 22:33:37.774011  252325 system_pods.go:89] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:33:37.774024  252325 system_pods.go:89] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:37.774029  252325 system_pods.go:89] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:37.774033  252325 system_pods.go:89] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:37.774037  252325 system_pods.go:89] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:37.774040  252325 system_pods.go:89] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:37.774043  252325 system_pods.go:89] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:37.774048  252325 system_pods.go:89] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:33:37.774061  252325 retry.go:31] will retry after 422.422645ms: missing components: kube-dns
	I1119 22:33:38.201323  252325 system_pods.go:86] 8 kube-system pods found
	I1119 22:33:38.201351  252325 system_pods.go:89] "coredns-66bc5c9577-jmjmf" [92ec3ba1-4706-48eb-bd5b-44ad8fc4175e] Running
	I1119 22:33:38.201358  252325 system_pods.go:89] "etcd-embed-certs-443380" [25d2ce20-7025-42ab-b91c-62b167988156] Running
	I1119 22:33:38.201364  252325 system_pods.go:89] "kindnet-gq4x5" [f3af49be-1079-4678-9b4f-9668bf940dbd] Running
	I1119 22:33:38.201370  252325 system_pods.go:89] "kube-apiserver-embed-certs-443380" [db8c222e-c85a-44b5-af14-76f756ce32c3] Running
	I1119 22:33:38.201377  252325 system_pods.go:89] "kube-controller-manager-embed-certs-443380" [a466d7f5-d0c9-4e2d-b61f-bf2426dd0519] Running
	I1119 22:33:38.201382  252325 system_pods.go:89] "kube-proxy-r5xtg" [f6a43862-cc3e-4385-92e7-94a60417b36c] Running
	I1119 22:33:38.201387  252325 system_pods.go:89] "kube-scheduler-embed-certs-443380" [024bcbe6-a13e-46e5-b2df-28085e08c73a] Running
	I1119 22:33:38.201392  252325 system_pods.go:89] "storage-provisioner" [abe5634c-fb84-4e79-b5cd-8a98efdc6417] Running
	I1119 22:33:38.201410  252325 system_pods.go:126] duration metric: took 925.672892ms to wait for k8s-apps to be running ...
	I1119 22:33:38.201420  252325 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:33:38.201470  252325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:33:38.215425  252325 system_svc.go:56] duration metric: took 13.999039ms WaitForService to wait for kubelet
	I1119 22:33:38.215452  252325 kubeadm.go:587] duration metric: took 12.793019797s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:33:38.215473  252325 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:33:38.218207  252325 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:33:38.218230  252325 node_conditions.go:123] node cpu capacity is 8
	I1119 22:33:38.218241  252325 node_conditions.go:105] duration metric: took 2.763018ms to run NodePressure ...
	I1119 22:33:38.218255  252325 start.go:242] waiting for startup goroutines ...
	I1119 22:33:38.218268  252325 start.go:247] waiting for cluster config update ...
	I1119 22:33:38.218285  252325 start.go:256] writing updated cluster config ...
	I1119 22:33:38.218604  252325 ssh_runner.go:195] Run: rm -f paused
	I1119 22:33:38.222676  252325 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:33:38.226257  252325 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jmjmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.230497  252325 pod_ready.go:94] pod "coredns-66bc5c9577-jmjmf" is "Ready"
	I1119 22:33:38.230536  252325 pod_ready.go:86] duration metric: took 4.244524ms for pod "coredns-66bc5c9577-jmjmf" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.232661  252325 pod_ready.go:83] waiting for pod "etcd-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.236397  252325 pod_ready.go:94] pod "etcd-embed-certs-443380" is "Ready"
	I1119 22:33:38.236416  252325 pod_ready.go:86] duration metric: took 3.737265ms for pod "etcd-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.238310  252325 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.242981  252325 pod_ready.go:94] pod "kube-apiserver-embed-certs-443380" is "Ready"
	I1119 22:33:38.242999  252325 pod_ready.go:86] duration metric: took 4.670826ms for pod "kube-apiserver-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.244923  252325 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.627463  252325 pod_ready.go:94] pod "kube-controller-manager-embed-certs-443380" is "Ready"
	I1119 22:33:38.627488  252325 pod_ready.go:86] duration metric: took 382.549793ms for pod "kube-controller-manager-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:38.827739  252325 pod_ready.go:83] waiting for pod "kube-proxy-r5xtg" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:39.226142  252325 pod_ready.go:94] pod "kube-proxy-r5xtg" is "Ready"
	I1119 22:33:39.226169  252325 pod_ready.go:86] duration metric: took 398.408001ms for pod "kube-proxy-r5xtg" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:39.427580  252325 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:39.827781  252325 pod_ready.go:94] pod "kube-scheduler-embed-certs-443380" is "Ready"
	I1119 22:33:39.827836  252325 pod_ready.go:86] duration metric: took 400.201717ms for pod "kube-scheduler-embed-certs-443380" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:33:39.827853  252325 pod_ready.go:40] duration metric: took 1.605146507s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:33:39.871483  252325 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:33:39.873526  252325 out.go:179] * Done! kubectl is now configured to use "embed-certs-443380" cluster and "default" namespace by default
	I1119 22:33:39.323985  257842 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:33:39.442549  257842 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:33:39.442737  257842 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:33:39.627688  257842 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:33:40.036493  257842 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:33:40.698146  257842 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:33:40.961731  257842 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:33:41.149359  257842 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:33:41.150288  257842 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:33:41.154317  257842 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:33:37.929115  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:37.929464  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:37.929520  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:37.929565  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:37.955356  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:37.955379  229026 cri.go:89] found id: ""
	I1119 22:33:37.955388  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:37.955438  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:37.959319  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:37.959393  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:37.984434  229026 cri.go:89] found id: ""
	I1119 22:33:37.984458  229026 logs.go:282] 0 containers: []
	W1119 22:33:37.984468  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:37.984475  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:37.984526  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:38.012164  229026 cri.go:89] found id: ""
	I1119 22:33:38.012190  229026 logs.go:282] 0 containers: []
	W1119 22:33:38.012199  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:38.012204  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:38.012285  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:38.036173  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:38.036195  229026 cri.go:89] found id: ""
	I1119 22:33:38.036205  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:38.036257  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:38.039850  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:38.039898  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:38.064432  229026 cri.go:89] found id: ""
	I1119 22:33:38.064452  229026 logs.go:282] 0 containers: []
	W1119 22:33:38.064461  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:38.064467  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:38.064514  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:38.090526  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:38.090548  229026 cri.go:89] found id: ""
	I1119 22:33:38.090557  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:38.090607  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:38.094245  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:38.094302  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:38.121462  229026 cri.go:89] found id: ""
	I1119 22:33:38.121481  229026 logs.go:282] 0 containers: []
	W1119 22:33:38.121491  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:38.121498  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:38.121549  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:38.146752  229026 cri.go:89] found id: ""
	I1119 22:33:38.146772  229026 logs.go:282] 0 containers: []
	W1119 22:33:38.146778  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:38.146787  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:38.146796  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:38.196010  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:38.196033  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:38.223390  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:38.223411  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:38.270213  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:38.270241  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:38.299662  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:38.299691  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:38.386912  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:38.386944  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:38.400305  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:38.400339  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:38.455714  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:38.455731  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:38.455743  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:40.987565  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:40.987943  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:40.987996  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:40.988049  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:41.016569  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:41.016586  229026 cri.go:89] found id: ""
	I1119 22:33:41.016593  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:41.016633  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:41.020316  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:41.020366  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:41.046437  229026 cri.go:89] found id: ""
	I1119 22:33:41.046457  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.046463  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:41.046468  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:41.046529  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:41.072677  229026 cri.go:89] found id: ""
	I1119 22:33:41.072701  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.072711  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:41.072719  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:41.072769  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:41.099927  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:41.099949  229026 cri.go:89] found id: ""
	I1119 22:33:41.099959  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:41.100014  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:41.104773  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:41.104852  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:41.139008  229026 cri.go:89] found id: ""
	I1119 22:33:41.139034  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.139043  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:41.139051  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:41.139109  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:41.170661  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:41.170688  229026 cri.go:89] found id: ""
	I1119 22:33:41.170706  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:41.170763  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:41.174802  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:41.174872  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:41.209289  229026 cri.go:89] found id: ""
	I1119 22:33:41.209313  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.209323  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:41.209330  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:41.209383  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:41.248091  229026 cri.go:89] found id: ""
	I1119 22:33:41.248112  229026 logs.go:282] 0 containers: []
	W1119 22:33:41.248119  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:41.248128  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:41.248139  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:41.341775  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:41.341806  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:41.355629  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:41.355651  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:41.412102  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:41.412120  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:41.412132  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:41.440857  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:41.440882  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:41.488518  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:41.488550  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:41.514120  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:41.514142  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:41.156021  257842 out.go:252]   - Booting up control plane ...
	I1119 22:33:41.156147  257842 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:33:41.156248  257842 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:33:41.157952  257842 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:33:41.175709  257842 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:33:41.175884  257842 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:33:41.184676  257842 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:33:41.185076  257842 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:33:41.185168  257842 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:33:41.291554  257842 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:33:41.291689  257842 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:33:41.793221  257842 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.701541ms
	I1119 22:33:41.796084  257842 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:33:41.796211  257842 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1119 22:33:41.796352  257842 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:33:41.796490  257842 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:33:43.308442  257842 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.512357875s
	I1119 22:33:44.044905  257842 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.248841505s
	I1119 22:33:45.797611  257842 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001570497s
	I1119 22:33:45.808645  257842 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:33:45.819335  257842 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:33:45.830686  257842 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:33:45.830987  257842 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-409987 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:33:45.839041  257842 kubeadm.go:319] [bootstrap-token] Using token: o014qj.gxcv4zxy9pcntvf3
	I1119 22:33:41.559717  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:41.559743  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:44.099962  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:33:44.100339  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:33:44.100389  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:33:44.100433  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:33:44.128669  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:44.128702  229026 cri.go:89] found id: ""
	I1119 22:33:44.128712  229026 logs.go:282] 1 containers: [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:33:44.128764  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:44.132578  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:33:44.132631  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:33:44.158942  229026 cri.go:89] found id: ""
	I1119 22:33:44.158963  229026 logs.go:282] 0 containers: []
	W1119 22:33:44.158972  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:33:44.158978  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:33:44.159035  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:33:44.185157  229026 cri.go:89] found id: ""
	I1119 22:33:44.185180  229026 logs.go:282] 0 containers: []
	W1119 22:33:44.185187  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:33:44.185193  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:33:44.185237  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:33:44.225356  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:44.225380  229026 cri.go:89] found id: ""
	I1119 22:33:44.225389  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:33:44.225443  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:44.231443  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:33:44.231519  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:33:44.269502  229026 cri.go:89] found id: ""
	I1119 22:33:44.269629  229026 logs.go:282] 0 containers: []
	W1119 22:33:44.269686  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:33:44.269703  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:33:44.269775  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:33:44.298843  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:44.298867  229026 cri.go:89] found id: ""
	I1119 22:33:44.298876  229026 logs.go:282] 1 containers: [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:33:44.298937  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:33:44.302692  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:33:44.302747  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:33:44.329142  229026 cri.go:89] found id: ""
	I1119 22:33:44.329168  229026 logs.go:282] 0 containers: []
	W1119 22:33:44.329179  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:33:44.329186  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:33:44.329242  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:33:44.354082  229026 cri.go:89] found id: ""
	I1119 22:33:44.354108  229026 logs.go:282] 0 containers: []
	W1119 22:33:44.354118  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:33:44.354128  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:33:44.354144  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:33:44.384805  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:33:44.384860  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:33:44.481636  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:33:44.481663  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:33:44.498395  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:33:44.498441  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:33:44.558873  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:33:44.558897  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:33:44.558912  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:33:44.595563  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:33:44.595588  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:33:44.647710  229026 logs.go:123] Gathering logs for kube-controller-manager [22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12] ...
	I1119 22:33:44.647740  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:33:44.672338  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:33:44.672362  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:33:45.841288  257842 out.go:252]   - Configuring RBAC rules ...
	I1119 22:33:45.841450  257842 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:33:45.843810  257842 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:33:45.849081  257842 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:33:45.851404  257842 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:33:45.854320  257842 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:33:45.856519  257842 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:33:46.204287  257842 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:33:46.618484  257842 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:33:47.203871  257842 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:33:47.204979  257842 kubeadm.go:319] 
	I1119 22:33:47.205066  257842 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:33:47.205079  257842 kubeadm.go:319] 
	I1119 22:33:47.205184  257842 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:33:47.205194  257842 kubeadm.go:319] 
	I1119 22:33:47.205254  257842 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:33:47.205345  257842 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:33:47.205410  257842 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:33:47.205416  257842 kubeadm.go:319] 
	I1119 22:33:47.205482  257842 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:33:47.205490  257842 kubeadm.go:319] 
	I1119 22:33:47.205572  257842 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:33:47.205589  257842 kubeadm.go:319] 
	I1119 22:33:47.205659  257842 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:33:47.205774  257842 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:33:47.205910  257842 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:33:47.205923  257842 kubeadm.go:319] 
	I1119 22:33:47.206061  257842 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:33:47.206169  257842 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:33:47.206179  257842 kubeadm.go:319] 
	I1119 22:33:47.206289  257842 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token o014qj.gxcv4zxy9pcntvf3 \
	I1119 22:33:47.206431  257842 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b \
	I1119 22:33:47.206462  257842 kubeadm.go:319] 	--control-plane 
	I1119 22:33:47.206471  257842 kubeadm.go:319] 
	I1119 22:33:47.206590  257842 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:33:47.206596  257842 kubeadm.go:319] 
	I1119 22:33:47.206699  257842 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token o014qj.gxcv4zxy9pcntvf3 \
	I1119 22:33:47.206888  257842 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b 
	I1119 22:33:47.210023  257842 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:33:47.210185  257842 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:33:47.210203  257842 cni.go:84] Creating CNI manager for ""
	I1119 22:33:47.210212  257842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:33:47.213201  257842 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:33:47.214365  257842 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:33:47.219125  257842 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:33:47.219143  257842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:33:47.232491  257842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:33:47.497885  257842 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:33:47.497964  257842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:47.497980  257842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-409987 minikube.k8s.io/updated_at=2025_11_19T22_33_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=default-k8s-diff-port-409987 minikube.k8s.io/primary=true
	I1119 22:33:47.513184  257842 ops.go:34] apiserver oom_adj: -16
	I1119 22:33:47.589020  257842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:48.089677  257842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:33:48.589861  257842 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Nov 19 22:33:37 embed-certs-443380 crio[783]: time="2025-11-19T22:33:37.142789227Z" level=info msg="Starting container: e3fb67f3417ef015d368c76d98f36f92d77c508154924ffa56b77ca62f84be93" id=d46c5e73-1f64-4d5b-af2c-ecf965341d92 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:33:37 embed-certs-443380 crio[783]: time="2025-11-19T22:33:37.144605181Z" level=info msg="Started container" PID=1852 containerID=e3fb67f3417ef015d368c76d98f36f92d77c508154924ffa56b77ca62f84be93 description=kube-system/coredns-66bc5c9577-jmjmf/coredns id=d46c5e73-1f64-4d5b-af2c-ecf965341d92 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0412f6469519be4bb719c7c24e06fa07b2334f6ff8de2d7b6a4a649150de631f
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.339984793Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4ee74025-95e0-4b6e-94a4-6b7967cfeb5a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.340061572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.344949248Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a73ff43cff7870c96f73bfaf2117cf3c98c2955a5614929850567182ed5786de UID:6ec43358-1e3e-4de9-acb0-6df760321c64 NetNS:/var/run/netns/f4862954-c09a-4e6d-8e42-28732199dff8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006244e0}] Aliases:map[]}"
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.344974311Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.360045143Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a73ff43cff7870c96f73bfaf2117cf3c98c2955a5614929850567182ed5786de UID:6ec43358-1e3e-4de9-acb0-6df760321c64 NetNS:/var/run/netns/f4862954-c09a-4e6d-8e42-28732199dff8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006244e0}] Aliases:map[]}"
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.360214683Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.360876703Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.361597196Z" level=info msg="Ran pod sandbox a73ff43cff7870c96f73bfaf2117cf3c98c2955a5614929850567182ed5786de with infra container: default/busybox/POD" id=4ee74025-95e0-4b6e-94a4-6b7967cfeb5a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.362670105Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f404389f-4ae5-4bc8-97eb-3a783d439af2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.362766395Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f404389f-4ae5-4bc8-97eb-3a783d439af2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.362799521Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f404389f-4ae5-4bc8-97eb-3a783d439af2 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.363598722Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1b18a5d9-07a5-45a4-9a49-0464c7ac5323 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:33:40 embed-certs-443380 crio[783]: time="2025-11-19T22:33:40.368574619Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:33:41 embed-certs-443380 crio[783]: time="2025-11-19T22:33:41.251215545Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1b18a5d9-07a5-45a4-9a49-0464c7ac5323 name=/runtime.v1.ImageService/PullImage
	Nov 19 22:33:41 embed-certs-443380 crio[783]: time="2025-11-19T22:33:41.251934277Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1efc38d3-aabc-441d-856f-f26f872131ec name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:41 embed-certs-443380 crio[783]: time="2025-11-19T22:33:41.253273405Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=46efa1cc-6d53-4f29-9844-3b934c2fbc02 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:33:41 embed-certs-443380 crio[783]: time="2025-11-19T22:33:41.256429757Z" level=info msg="Creating container: default/busybox/busybox" id=7a8e4e4f-f1ed-4299-8012-b9121946d83f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:41 embed-certs-443380 crio[783]: time="2025-11-19T22:33:41.256772373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:41 embed-certs-443380 crio[783]: time="2025-11-19T22:33:41.261247151Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:41 embed-certs-443380 crio[783]: time="2025-11-19T22:33:41.261709887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:33:41 embed-certs-443380 crio[783]: time="2025-11-19T22:33:41.286257669Z" level=info msg="Created container 927075ca3f8e891fd6f5dd61aca62794174cd8cbd167f1203bd9f10c8de7c87a: default/busybox/busybox" id=7a8e4e4f-f1ed-4299-8012-b9121946d83f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:33:41 embed-certs-443380 crio[783]: time="2025-11-19T22:33:41.286904927Z" level=info msg="Starting container: 927075ca3f8e891fd6f5dd61aca62794174cd8cbd167f1203bd9f10c8de7c87a" id=1d0469dc-60fd-4e66-b866-7dd1c11fbb08 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:33:41 embed-certs-443380 crio[783]: time="2025-11-19T22:33:41.288739235Z" level=info msg="Started container" PID=1934 containerID=927075ca3f8e891fd6f5dd61aca62794174cd8cbd167f1203bd9f10c8de7c87a description=default/busybox/busybox id=1d0469dc-60fd-4e66-b866-7dd1c11fbb08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a73ff43cff7870c96f73bfaf2117cf3c98c2955a5614929850567182ed5786de
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	927075ca3f8e8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   a73ff43cff787       busybox                                      default
	e3fb67f3417ef       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   0412f6469519b       coredns-66bc5c9577-jmjmf                     kube-system
	10e3ff85c015d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   6959cd8341318       storage-provisioner                          kube-system
	dbd845d00016a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   c1ecf0e63f465       kindnet-gq4x5                                kube-system
	aa80644a0ddea       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   01b3e1cb099b3       kube-proxy-r5xtg                             kube-system
	1e609969c7c3b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   90842108e534a       kube-apiserver-embed-certs-443380            kube-system
	9049a55ed8344       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   f07ab70fe6aea       kube-controller-manager-embed-certs-443380   kube-system
	d69117a85e746       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   896d243da22e0       etcd-embed-certs-443380                      kube-system
	2509325f6eb71       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   db396bc79d9c7       kube-scheduler-embed-certs-443380            kube-system
	
	
	==> coredns [e3fb67f3417ef015d368c76d98f36f92d77c508154924ffa56b77ca62f84be93] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39464 - 17831 "HINFO IN 1774043413253535659.1043186783513605088. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.496935479s
	
	
	==> describe nodes <==
	Name:               embed-certs-443380
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-443380
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=embed-certs-443380
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_33_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:33:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-443380
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:33:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:33:36 +0000   Wed, 19 Nov 2025 22:33:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:33:36 +0000   Wed, 19 Nov 2025 22:33:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:33:36 +0000   Wed, 19 Nov 2025 22:33:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:33:36 +0000   Wed, 19 Nov 2025 22:33:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-443380
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                e1eb2e2e-5c81-4978-ae2f-b498e52a3d43
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-jmjmf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-443380                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-gq4x5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-443380             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-443380    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-r5xtg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-443380             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node embed-certs-443380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node embed-certs-443380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node embed-certs-443380 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node embed-certs-443380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node embed-certs-443380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node embed-certs-443380 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node embed-certs-443380 event: Registered Node embed-certs-443380 in Controller
	  Normal  NodeReady                13s                kubelet          Node embed-certs-443380 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [d69117a85e746e67f28a243b54da2a4d7953195f057184914031f6aa1464e769] <==
	{"level":"warn","ts":"2025-11-19T22:33:16.935886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:16.942518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:16.950224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:16.958046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:16.964691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:16.972355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:16.980963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:16.987738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:16.994958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.001998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.009051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.014907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.021510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.027471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.033599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.047458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.050303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.062000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.065167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.070878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.076787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:17.122983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:27.843226Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.486559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-19T22:33:27.843327Z","caller":"traceutil/trace.go:172","msg":"trace[303572870] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:418; }","duration":"164.649594ms","start":"2025-11-19T22:33:27.678661Z","end":"2025-11-19T22:33:27.843311Z","steps":["trace[303572870] 'range keys from in-memory index tree'  (duration: 164.426082ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:33:27.982974Z","caller":"traceutil/trace.go:172","msg":"trace[2016072477] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"133.932829ms","start":"2025-11-19T22:33:27.849018Z","end":"2025-11-19T22:33:27.982951Z","steps":["trace[2016072477] 'process raft request'  (duration: 133.779129ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:33:49 up  1:16,  0 user,  load average: 2.25, 2.64, 1.81
	Linux embed-certs-443380 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dbd845d00016a1a084df71db46946d9f34ca79361043da42fa35b48283344279] <==
	I1119 22:33:26.450487       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:33:26.450789       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:33:26.544261       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:33:26.544289       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:33:26.544314       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:33:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:33:26.653147       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:33:26.653174       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:33:26.653186       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:33:26.744291       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:33:27.044199       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:33:27.044232       1 metrics.go:72] Registering metrics
	I1119 22:33:27.044365       1 controller.go:711] "Syncing nftables rules"
	I1119 22:33:36.654276       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:33:36.654319       1 main.go:301] handling current node
	I1119 22:33:46.656931       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:33:46.656968       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1e609969c7c3b5ffcfb3478647d40e6e8d77a22e85f4d90a6f0d949e1ca32c09] <==
	E1119 22:33:17.682666       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1119 22:33:17.730250       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:33:17.735250       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:33:17.735343       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:33:17.742114       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:33:17.742362       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:33:17.829143       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:33:18.532893       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:33:18.538169       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:33:18.538188       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:33:19.001912       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:33:19.036453       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:33:19.135657       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:33:19.143176       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1119 22:33:19.144804       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:33:19.148948       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:33:19.563085       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:33:19.915318       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:33:19.925105       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:33:19.932335       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:33:25.307622       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:33:25.566424       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:33:25.613538       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:33:25.621525       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1119 22:33:48.131997       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:52470: use of closed network connection
	
	
	==> kube-controller-manager [9049a55ed8344b9039a042e41bf730114fbd0f07c18898a2afbe759567a44e27] <==
	I1119 22:33:24.517224       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:33:24.519383       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:33:24.525341       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:33:24.526977       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-443380" podCIDRs=["10.244.0.0/24"]
	I1119 22:33:24.533131       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:33:24.540389       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 22:33:24.553855       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 22:33:24.554840       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:33:24.554856       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:33:24.554868       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 22:33:24.554868       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:33:24.554903       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:33:24.554976       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:33:24.554985       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:33:24.556064       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:33:24.556082       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:33:24.556099       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:33:24.556192       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:33:24.556240       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:33:24.556873       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:33:24.557323       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 22:33:24.564563       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:33:24.566714       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:33:24.581016       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:33:39.507710       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [aa80644a0ddea6195d46b16756ba1b70f7d242ef8de92d0385720e1911db0064] <==
	I1119 22:33:26.324628       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:33:26.408508       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:33:26.509479       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:33:26.509517       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:33:26.509620       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:33:26.527462       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:33:26.527508       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:33:26.532724       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:33:26.533197       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:33:26.533231       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:33:26.535666       1 config.go:200] "Starting service config controller"
	I1119 22:33:26.535708       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:33:26.535730       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:33:26.535736       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:33:26.535779       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:33:26.535785       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:33:26.535960       1 config.go:309] "Starting node config controller"
	I1119 22:33:26.535976       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:33:26.635877       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:33:26.635900       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:33:26.635903       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:33:26.636117       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [2509325f6eb71e3ef9ddd4c16493e4de75e2801ce86cde2281d7175a372c69b2] <==
	E1119 22:33:17.588854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:33:17.588854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:33:17.588934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:33:17.588996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:33:17.589090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:33:17.589153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:33:17.589676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:33:17.589701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:33:17.589673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:33:17.589761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:33:17.589956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:33:17.590045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:33:17.590210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:33:18.411663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:33:18.421942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:33:18.469180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:33:18.479366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:33:18.584605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:33:18.691205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:33:18.714806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:33:18.750197       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 22:33:18.803759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:33:18.817707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:33:18.824860       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1119 22:33:21.083781       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: I1119 22:33:25.412357    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3af49be-1079-4678-9b4f-9668bf940dbd-xtables-lock\") pod \"kindnet-gq4x5\" (UID: \"f3af49be-1079-4678-9b4f-9668bf940dbd\") " pod="kube-system/kindnet-gq4x5"
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: I1119 22:33:25.412422    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3af49be-1079-4678-9b4f-9668bf940dbd-lib-modules\") pod \"kindnet-gq4x5\" (UID: \"f3af49be-1079-4678-9b4f-9668bf940dbd\") " pod="kube-system/kindnet-gq4x5"
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: I1119 22:33:25.412446    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f6a43862-cc3e-4385-92e7-94a60417b36c-kube-proxy\") pod \"kube-proxy-r5xtg\" (UID: \"f6a43862-cc3e-4385-92e7-94a60417b36c\") " pod="kube-system/kube-proxy-r5xtg"
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: I1119 22:33:25.412526    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6a43862-cc3e-4385-92e7-94a60417b36c-lib-modules\") pod \"kube-proxy-r5xtg\" (UID: \"f6a43862-cc3e-4385-92e7-94a60417b36c\") " pod="kube-system/kube-proxy-r5xtg"
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: I1119 22:33:25.412579    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78kbj\" (UniqueName: \"kubernetes.io/projected/f6a43862-cc3e-4385-92e7-94a60417b36c-kube-api-access-78kbj\") pod \"kube-proxy-r5xtg\" (UID: \"f6a43862-cc3e-4385-92e7-94a60417b36c\") " pod="kube-system/kube-proxy-r5xtg"
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: I1119 22:33:25.412609    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f3af49be-1079-4678-9b4f-9668bf940dbd-cni-cfg\") pod \"kindnet-gq4x5\" (UID: \"f3af49be-1079-4678-9b4f-9668bf940dbd\") " pod="kube-system/kindnet-gq4x5"
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: I1119 22:33:25.412637    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6597c\" (UniqueName: \"kubernetes.io/projected/f3af49be-1079-4678-9b4f-9668bf940dbd-kube-api-access-6597c\") pod \"kindnet-gq4x5\" (UID: \"f3af49be-1079-4678-9b4f-9668bf940dbd\") " pod="kube-system/kindnet-gq4x5"
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: I1119 22:33:25.412657    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6a43862-cc3e-4385-92e7-94a60417b36c-xtables-lock\") pod \"kube-proxy-r5xtg\" (UID: \"f6a43862-cc3e-4385-92e7-94a60417b36c\") " pod="kube-system/kube-proxy-r5xtg"
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: E1119 22:33:25.529190    1322 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: E1119 22:33:25.529230    1322 projected.go:196] Error preparing data for projected volume kube-api-access-78kbj for pod kube-system/kube-proxy-r5xtg: configmap "kube-root-ca.crt" not found
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: E1119 22:33:25.529187    1322 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: E1119 22:33:25.529317    1322 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6a43862-cc3e-4385-92e7-94a60417b36c-kube-api-access-78kbj podName:f6a43862-cc3e-4385-92e7-94a60417b36c nodeName:}" failed. No retries permitted until 2025-11-19 22:33:26.029287707 +0000 UTC m=+6.339992834 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-78kbj" (UniqueName: "kubernetes.io/projected/f6a43862-cc3e-4385-92e7-94a60417b36c-kube-api-access-78kbj") pod "kube-proxy-r5xtg" (UID: "f6a43862-cc3e-4385-92e7-94a60417b36c") : configmap "kube-root-ca.crt" not found
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: E1119 22:33:25.529321    1322 projected.go:196] Error preparing data for projected volume kube-api-access-6597c for pod kube-system/kindnet-gq4x5: configmap "kube-root-ca.crt" not found
	Nov 19 22:33:25 embed-certs-443380 kubelet[1322]: E1119 22:33:25.529393    1322 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f3af49be-1079-4678-9b4f-9668bf940dbd-kube-api-access-6597c podName:f3af49be-1079-4678-9b4f-9668bf940dbd nodeName:}" failed. No retries permitted until 2025-11-19 22:33:26.02937249 +0000 UTC m=+6.340077611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6597c" (UniqueName: "kubernetes.io/projected/f3af49be-1079-4678-9b4f-9668bf940dbd-kube-api-access-6597c") pod "kindnet-gq4x5" (UID: "f3af49be-1079-4678-9b4f-9668bf940dbd") : configmap "kube-root-ca.crt" not found
	Nov 19 22:33:26 embed-certs-443380 kubelet[1322]: I1119 22:33:26.839505    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r5xtg" podStartSLOduration=1.8394836030000001 podStartE2EDuration="1.839483603s" podCreationTimestamp="2025-11-19 22:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:33:26.839208815 +0000 UTC m=+7.149913943" watchObservedRunningTime="2025-11-19 22:33:26.839483603 +0000 UTC m=+7.150188737"
	Nov 19 22:33:27 embed-certs-443380 kubelet[1322]: I1119 22:33:27.581556    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gq4x5" podStartSLOduration=2.581535128 podStartE2EDuration="2.581535128s" podCreationTimestamp="2025-11-19 22:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:33:26.850520618 +0000 UTC m=+7.161225747" watchObservedRunningTime="2025-11-19 22:33:27.581535128 +0000 UTC m=+7.892240258"
	Nov 19 22:33:36 embed-certs-443380 kubelet[1322]: I1119 22:33:36.762879    1322 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:33:36 embed-certs-443380 kubelet[1322]: I1119 22:33:36.891118    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/abe5634c-fb84-4e79-b5cd-8a98efdc6417-tmp\") pod \"storage-provisioner\" (UID: \"abe5634c-fb84-4e79-b5cd-8a98efdc6417\") " pod="kube-system/storage-provisioner"
	Nov 19 22:33:36 embed-certs-443380 kubelet[1322]: I1119 22:33:36.891166    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92ec3ba1-4706-48eb-bd5b-44ad8fc4175e-config-volume\") pod \"coredns-66bc5c9577-jmjmf\" (UID: \"92ec3ba1-4706-48eb-bd5b-44ad8fc4175e\") " pod="kube-system/coredns-66bc5c9577-jmjmf"
	Nov 19 22:33:36 embed-certs-443380 kubelet[1322]: I1119 22:33:36.891188    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5bh8\" (UniqueName: \"kubernetes.io/projected/abe5634c-fb84-4e79-b5cd-8a98efdc6417-kube-api-access-z5bh8\") pod \"storage-provisioner\" (UID: \"abe5634c-fb84-4e79-b5cd-8a98efdc6417\") " pod="kube-system/storage-provisioner"
	Nov 19 22:33:36 embed-certs-443380 kubelet[1322]: I1119 22:33:36.891212    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbqx7\" (UniqueName: \"kubernetes.io/projected/92ec3ba1-4706-48eb-bd5b-44ad8fc4175e-kube-api-access-qbqx7\") pod \"coredns-66bc5c9577-jmjmf\" (UID: \"92ec3ba1-4706-48eb-bd5b-44ad8fc4175e\") " pod="kube-system/coredns-66bc5c9577-jmjmf"
	Nov 19 22:33:37 embed-certs-443380 kubelet[1322]: I1119 22:33:37.861625    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jmjmf" podStartSLOduration=12.861601017 podStartE2EDuration="12.861601017s" podCreationTimestamp="2025-11-19 22:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:33:37.86089748 +0000 UTC m=+18.171602609" watchObservedRunningTime="2025-11-19 22:33:37.861601017 +0000 UTC m=+18.172306146"
	Nov 19 22:33:37 embed-certs-443380 kubelet[1322]: I1119 22:33:37.888132    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.888112902 podStartE2EDuration="12.888112902s" podCreationTimestamp="2025-11-19 22:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:33:37.888034326 +0000 UTC m=+18.198739453" watchObservedRunningTime="2025-11-19 22:33:37.888112902 +0000 UTC m=+18.198818031"
	Nov 19 22:33:40 embed-certs-443380 kubelet[1322]: I1119 22:33:40.109871    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mv6w\" (UniqueName: \"kubernetes.io/projected/6ec43358-1e3e-4de9-acb0-6df760321c64-kube-api-access-8mv6w\") pod \"busybox\" (UID: \"6ec43358-1e3e-4de9-acb0-6df760321c64\") " pod="default/busybox"
	Nov 19 22:33:41 embed-certs-443380 kubelet[1322]: I1119 22:33:41.867519    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.977942808 podStartE2EDuration="1.867501736s" podCreationTimestamp="2025-11-19 22:33:40 +0000 UTC" firstStartedPulling="2025-11-19 22:33:40.363144375 +0000 UTC m=+20.673849483" lastFinishedPulling="2025-11-19 22:33:41.252703303 +0000 UTC m=+21.563408411" observedRunningTime="2025-11-19 22:33:41.867247337 +0000 UTC m=+22.177952482" watchObservedRunningTime="2025-11-19 22:33:41.867501736 +0000 UTC m=+22.178206844"
	
	
	==> storage-provisioner [10e3ff85c015d589487ba59e5dd1eee5242d5061d202815a634180f861d7fea1] <==
	I1119 22:33:37.146828       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:33:37.155467       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:33:37.155521       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:33:37.157403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:37.162138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:33:37.162305       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:33:37.162450       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-443380_44d8a221-98b3-4870-aac2-143e220ca497!
	I1119 22:33:37.162448       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"824b733c-0cb0-473e-abb7-ba15ddd82973", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-443380_44d8a221-98b3-4870-aac2-143e220ca497 became leader
	W1119 22:33:37.164560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:37.168882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:33:37.262868       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-443380_44d8a221-98b3-4870-aac2-143e220ca497!
	W1119 22:33:39.171487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:39.175635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:41.179727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:41.184392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:43.188015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:43.197289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:45.201012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:45.205681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:47.209978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:47.214810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:49.217877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:33:49.221808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-443380 -n embed-certs-443380
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-443380 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.95s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-949690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-949690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.129718ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:34:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-949690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-949690
helpers_test.go:243: (dbg) docker inspect newest-cni-949690:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0",
	        "Created": "2025-11-19T22:33:56.785605734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 266756,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:33:56.818638871Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0/hostname",
	        "HostsPath": "/var/lib/docker/containers/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0/hosts",
	        "LogPath": "/var/lib/docker/containers/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0-json.log",
	        "Name": "/newest-cni-949690",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-949690:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-949690",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0",
	                "LowerDir": "/var/lib/docker/overlay2/95b61aaa1ca1f1c411435a7de9b6c0ce104fa0f195b18468c542a269c636cb4a-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/95b61aaa1ca1f1c411435a7de9b6c0ce104fa0f195b18468c542a269c636cb4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/95b61aaa1ca1f1c411435a7de9b6c0ce104fa0f195b18468c542a269c636cb4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/95b61aaa1ca1f1c411435a7de9b6c0ce104fa0f195b18468c542a269c636cb4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-949690",
	                "Source": "/var/lib/docker/volumes/newest-cni-949690/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-949690",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-949690",
	                "name.minikube.sigs.k8s.io": "newest-cni-949690",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ef328efcc34552c058efcf0ccf13cee5bbc9a611b27c465b7e75a54f2d6da3d9",
	            "SandboxKey": "/var/run/docker/netns/ef328efcc345",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-949690": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f9b0cf1aef5acfa5bdc194747c88b940e5b4d3be9960af2a5c8a6c56975f9e3f",
	                    "EndpointID": "7f547619d4e1935b198a38d6561a2fc4b9e1856a909560f94e4ba57b69435e00",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "be:2f:fd:33:47:e8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-949690",
	                        "00eedca978ff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
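The JSON above is Docker's container-level view of the newest-cni-949690 node (port mappings, mounts, overlay2 layers, and network settings), captured by the test harness for post-mortem analysis. As a rough sketch only, assuming the container and the out/minikube-linux-amd64 binary from this run are still available locally, the same data can be gathered by hand with commands mirroring the helpers_test.go invocations shown in this report:

	# hypothetical manual reproduction of the post-mortem capture
	docker container inspect newest-cni-949690                     # full JSON, as dumped above
	out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-949690 -n newest-cni-949690
	out/minikube-linux-amd64 -p newest-cni-949690 logs -n 25       # recent log lines, as collected below
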
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-949690 -n newest-cni-949690
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-949690 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ addons  │ enable dashboard -p old-k8s-version-680619 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ addons  │ enable metrics-server -p no-preload-178067 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │                     │
	│ stop    │ -p no-preload-178067 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ addons  │ enable dashboard -p no-preload-178067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ start   │ -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p cert-expiration-855818 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-855818       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ delete  │ -p cert-expiration-855818                                                                                                                                                                                                                     │ cert-expiration-855818       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ image   │ old-k8s-version-680619 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p old-k8s-version-680619 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p disable-driver-mounts-726490                                                                                                                                                                                                               │ disable-driver-mounts-726490 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ image   │ no-preload-178067 image list --format=json                                                                                                                                                                                                    │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p no-preload-178067 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-443380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ stop    │ -p embed-certs-443380 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-443380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-949690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:34:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:34:07.856187  269329 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:34:07.856458  269329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:34:07.856467  269329 out.go:374] Setting ErrFile to fd 2...
	I1119 22:34:07.856473  269329 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:34:07.856709  269329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:34:07.857186  269329 out.go:368] Setting JSON to false
	I1119 22:34:07.858320  269329 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4596,"bootTime":1763587052,"procs":302,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:34:07.858397  269329 start.go:143] virtualization: kvm guest
	I1119 22:34:07.860432  269329 out.go:179] * [embed-certs-443380] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:34:07.861909  269329 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:34:07.861909  269329 notify.go:221] Checking for updates...
	I1119 22:34:07.864143  269329 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:34:07.865259  269329 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:07.866271  269329 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:34:07.870998  269329 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:34:07.872104  269329 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:34:07.873600  269329 config.go:182] Loaded profile config "embed-certs-443380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:34:07.874303  269329 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:34:07.901386  269329 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:34:07.901502  269329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:34:07.966012  269329 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:34:07.953249146 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:34:07.966162  269329 docker.go:319] overlay module found
	I1119 22:34:07.967698  269329 out.go:179] * Using the docker driver based on existing profile
	I1119 22:34:07.968831  269329 start.go:309] selected driver: docker
	I1119 22:34:07.968853  269329 start.go:930] validating driver "docker" against &{Name:embed-certs-443380 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:34:07.968967  269329 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:34:07.969732  269329 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:34:08.040795  269329 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:34:08.029519802 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:34:08.041183  269329 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:34:08.041225  269329 cni.go:84] Creating CNI manager for ""
	I1119 22:34:08.041288  269329 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:34:08.041332  269329 start.go:353] cluster config:
	{Name:embed-certs-443380 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:34:08.043894  269329 out.go:179] * Starting "embed-certs-443380" primary control-plane node in "embed-certs-443380" cluster
	I1119 22:34:08.045932  269329 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:34:08.047303  269329 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:34:08.048556  269329 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:34:08.048591  269329 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:34:08.048600  269329 cache.go:65] Caching tarball of preloaded images
	I1119 22:34:08.048613  269329 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:34:08.048807  269329 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:34:08.048846  269329 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:34:08.048965  269329 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/config.json ...
	I1119 22:34:08.076330  269329 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:34:08.076348  269329 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:34:08.076364  269329 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:34:08.076389  269329 start.go:360] acquireMachinesLock for embed-certs-443380: {Name:mk45876245c2cf21fce38118b7c82861612c5d41 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:34:08.076485  269329 start.go:364] duration metric: took 72.729µs to acquireMachinesLock for "embed-certs-443380"
	I1119 22:34:08.076510  269329 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:34:08.076520  269329 fix.go:54] fixHost starting: 
	I1119 22:34:08.076777  269329 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:34:08.096201  269329 fix.go:112] recreateIfNeeded on embed-certs-443380: state=Stopped err=<nil>
	W1119 22:34:08.096234  269329 fix.go:138] unexpected machine state, will restart: <nil>
	W1119 22:34:06.006591  257842 node_ready.go:57] node "default-k8s-diff-port-409987" has "Ready":"False" status (will retry)
	W1119 22:34:08.010651  257842 node_ready.go:57] node "default-k8s-diff-port-409987" has "Ready":"False" status (will retry)
	I1119 22:34:09.020020  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:34:11.860292  265374 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1119 22:34:11.860358  265374 kubeadm.go:319] [preflight] Running pre-flight checks
	I1119 22:34:11.860514  265374 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1119 22:34:11.860605  265374 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1119 22:34:11.860637  265374 kubeadm.go:319] OS: Linux
	I1119 22:34:11.860681  265374 kubeadm.go:319] CGROUPS_CPU: enabled
	I1119 22:34:11.860720  265374 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1119 22:34:11.860798  265374 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1119 22:34:11.860896  265374 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1119 22:34:11.860970  265374 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1119 22:34:11.861048  265374 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1119 22:34:11.861117  265374 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1119 22:34:11.861189  265374 kubeadm.go:319] CGROUPS_IO: enabled
	I1119 22:34:11.861286  265374 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1119 22:34:11.861425  265374 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1119 22:34:11.861568  265374 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1119 22:34:11.861633  265374 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1119 22:34:11.862955  265374 out.go:252]   - Generating certificates and keys ...
	I1119 22:34:11.863022  265374 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1119 22:34:11.863092  265374 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1119 22:34:11.863182  265374 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:34:11.863295  265374 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:34:11.863381  265374 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:34:11.863475  265374 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:34:11.863548  265374 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:34:11.863723  265374 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-949690] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:34:11.863804  265374 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:34:11.863957  265374 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-949690] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:34:11.864048  265374 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:34:11.864137  265374 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:34:11.864205  265374 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:34:11.864289  265374 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:34:11.864372  265374 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:34:11.864431  265374 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:34:11.864477  265374 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:34:11.864553  265374 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:34:11.864610  265374 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:34:11.864701  265374 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:34:11.864782  265374 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:34:11.865942  265374 out.go:252]   - Booting up control plane ...
	I1119 22:34:11.866016  265374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:34:11.866094  265374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:34:11.866155  265374 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:34:11.866251  265374 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:34:11.866367  265374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:34:11.866514  265374 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:34:11.866595  265374 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:34:11.866632  265374 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:34:11.866755  265374 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:34:11.866888  265374 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:34:11.866946  265374 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001495866s
	I1119 22:34:11.867030  265374 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:34:11.867134  265374 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1119 22:34:11.867281  265374 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:34:11.867386  265374 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:34:11.867459  265374 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.226791935s
	I1119 22:34:11.867533  265374 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.013126684s
	I1119 22:34:11.867600  265374 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501648474s
	I1119 22:34:11.867699  265374 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:34:11.867845  265374 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:34:11.867899  265374 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:34:11.868051  265374 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-949690 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:34:11.868100  265374 kubeadm.go:319] [bootstrap-token] Using token: 56xjsb.8p412dqibmxh3mus
	I1119 22:34:11.869854  265374 out.go:252]   - Configuring RBAC rules ...
	I1119 22:34:11.869951  265374 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:34:11.870039  265374 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:34:11.870163  265374 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:34:11.870276  265374 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:34:11.870394  265374 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:34:11.870468  265374 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:34:11.870574  265374 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:34:11.870611  265374 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:34:11.870650  265374 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:34:11.870656  265374 kubeadm.go:319] 
	I1119 22:34:11.870705  265374 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:34:11.870711  265374 kubeadm.go:319] 
	I1119 22:34:11.870774  265374 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:34:11.870783  265374 kubeadm.go:319] 
	I1119 22:34:11.870828  265374 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:34:11.870881  265374 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:34:11.870925  265374 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:34:11.870930  265374 kubeadm.go:319] 
	I1119 22:34:11.870986  265374 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:34:11.870993  265374 kubeadm.go:319] 
	I1119 22:34:11.871030  265374 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:34:11.871035  265374 kubeadm.go:319] 
	I1119 22:34:11.871088  265374 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:34:11.871155  265374 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:34:11.871224  265374 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:34:11.871237  265374 kubeadm.go:319] 
	I1119 22:34:11.871316  265374 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:34:11.871387  265374 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:34:11.871398  265374 kubeadm.go:319] 
	I1119 22:34:11.871515  265374 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 56xjsb.8p412dqibmxh3mus \
	I1119 22:34:11.871612  265374 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b \
	I1119 22:34:11.871632  265374 kubeadm.go:319] 	--control-plane 
	I1119 22:34:11.871637  265374 kubeadm.go:319] 
	I1119 22:34:11.871707  265374 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:34:11.871713  265374 kubeadm.go:319] 
	I1119 22:34:11.871783  265374 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 56xjsb.8p412dqibmxh3mus \
	I1119 22:34:11.871894  265374 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b 
	I1119 22:34:11.871903  265374 cni.go:84] Creating CNI manager for ""
	I1119 22:34:11.871909  265374 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:34:11.873172  265374 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:34:08.098060  269329 out.go:252] * Restarting existing docker container for "embed-certs-443380" ...
	I1119 22:34:08.098131  269329 cli_runner.go:164] Run: docker start embed-certs-443380
	I1119 22:34:08.416476  269329 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:34:08.437974  269329 kic.go:430] container "embed-certs-443380" state is running.
	I1119 22:34:08.438322  269329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-443380
	I1119 22:34:08.460858  269329 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/config.json ...
	I1119 22:34:08.461123  269329 machine.go:94] provisionDockerMachine start ...
	I1119 22:34:08.461211  269329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:34:08.485164  269329 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:08.485425  269329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1119 22:34:08.485439  269329 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:34:08.486104  269329 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39674->127.0.0.1:33088: read: connection reset by peer
	I1119 22:34:11.611261  269329 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-443380
	
	I1119 22:34:11.611286  269329 ubuntu.go:182] provisioning hostname "embed-certs-443380"
	I1119 22:34:11.611334  269329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:34:11.629793  269329 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:11.630009  269329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1119 22:34:11.630032  269329 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-443380 && echo "embed-certs-443380" | sudo tee /etc/hostname
	I1119 22:34:11.763759  269329 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-443380
	
	I1119 22:34:11.763854  269329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:34:11.783654  269329 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:11.783875  269329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1119 22:34:11.783897  269329 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-443380' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-443380/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-443380' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:34:11.909848  269329 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:34:11.909880  269329 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 22:34:11.909902  269329 ubuntu.go:190] setting up certificates
	I1119 22:34:11.909911  269329 provision.go:84] configureAuth start
	I1119 22:34:11.909976  269329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-443380
	I1119 22:34:11.930208  269329 provision.go:143] copyHostCerts
	I1119 22:34:11.930267  269329 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem, removing ...
	I1119 22:34:11.930281  269329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem
	I1119 22:34:11.930340  269329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 22:34:11.930449  269329 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem, removing ...
	I1119 22:34:11.930460  269329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem
	I1119 22:34:11.930491  269329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 22:34:11.930563  269329 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem, removing ...
	I1119 22:34:11.930574  269329 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem
	I1119 22:34:11.930607  269329 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 22:34:11.930680  269329 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.embed-certs-443380 san=[127.0.0.1 192.168.85.2 embed-certs-443380 localhost minikube]
	I1119 22:34:12.835828  269329 provision.go:177] copyRemoteCerts
	I1119 22:34:12.835910  269329 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:34:12.835955  269329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:34:12.853318  269329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	W1119 22:34:10.506661  257842 node_ready.go:57] node "default-k8s-diff-port-409987" has "Ready":"False" status (will retry)
	W1119 22:34:13.006511  257842 node_ready.go:57] node "default-k8s-diff-port-409987" has "Ready":"False" status (will retry)
	I1119 22:34:12.944575  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:34:12.962225  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1119 22:34:12.979578  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:34:12.995849  269329 provision.go:87] duration metric: took 1.085926541s to configureAuth
	I1119 22:34:12.995871  269329 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:34:12.996040  269329 config.go:182] Loaded profile config "embed-certs-443380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:34:12.996141  269329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:34:13.014353  269329 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:13.014549  269329 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I1119 22:34:13.014570  269329 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:34:13.332577  269329 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:34:13.332607  269329 machine.go:97] duration metric: took 4.87146609s to provisionDockerMachine
	I1119 22:34:13.332622  269329 start.go:293] postStartSetup for "embed-certs-443380" (driver="docker")
	I1119 22:34:13.332637  269329 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:34:13.332720  269329 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:34:13.332809  269329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:34:13.352157  269329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:34:13.443899  269329 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:34:13.447234  269329 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:34:13.447257  269329 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:34:13.447268  269329 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 22:34:13.447320  269329 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 22:34:13.447417  269329 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem -> 128292.pem in /etc/ssl/certs
	I1119 22:34:13.447546  269329 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:34:13.454718  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:34:13.471717  269329 start.go:296] duration metric: took 139.082369ms for postStartSetup
	I1119 22:34:13.471785  269329 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:34:13.471870  269329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:34:13.491217  269329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:34:13.579739  269329 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:34:13.584313  269329 fix.go:56] duration metric: took 5.507788539s for fixHost
	I1119 22:34:13.584339  269329 start.go:83] releasing machines lock for "embed-certs-443380", held for 5.50783945s
	I1119 22:34:13.584403  269329 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-443380
	I1119 22:34:13.603482  269329 ssh_runner.go:195] Run: cat /version.json
	I1119 22:34:13.603524  269329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:34:13.603570  269329 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:34:13.603631  269329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:34:13.620894  269329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:34:13.622300  269329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:34:13.772575  269329 ssh_runner.go:195] Run: systemctl --version
	I1119 22:34:13.779462  269329 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:34:13.814156  269329 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:34:13.818710  269329 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:34:13.818766  269329 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:34:13.826913  269329 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:34:13.826931  269329 start.go:496] detecting cgroup driver to use...
	I1119 22:34:13.826961  269329 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:34:13.827004  269329 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:34:13.840216  269329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:34:13.851779  269329 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:34:13.851847  269329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:34:13.865073  269329 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:34:13.876921  269329 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:34:13.956867  269329 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:34:14.044830  269329 docker.go:234] disabling docker service ...
	I1119 22:34:14.044892  269329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:34:14.060342  269329 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:34:14.073025  269329 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:34:14.163746  269329 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:34:14.264732  269329 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:34:14.279573  269329 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:34:14.294881  269329 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:34:14.294943  269329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:14.305097  269329 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 22:34:14.305155  269329 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:14.313700  269329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:14.322344  269329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:14.332322  269329 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:34:14.340200  269329 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:14.348362  269329 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:14.357010  269329 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:14.366204  269329 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:34:14.373112  269329 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:34:14.380344  269329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:14.459842  269329 ssh_runner.go:195] Run: sudo systemctl restart crio
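	(Editorial note: the sed/tee sequence above is how minikube reconfigures CRI-O before this restart: crictl is pointed at unix:///var/run/crio/crio.sock, the pause image is pinned, and the cgroup manager is switched to systemd. As an illustrative sketch only, not captured from this node, the drop-in /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly:

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]

	The exact section placement depends on the file already present on the node; only the individual settings are guaranteed by the sed commands above.)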
	I1119 22:34:14.593213  269329 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:34:14.593282  269329 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:34:14.597022  269329 start.go:564] Will wait 60s for crictl version
	I1119 22:34:14.597079  269329 ssh_runner.go:195] Run: which crictl
	I1119 22:34:14.600786  269329 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:34:14.626837  269329 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:34:14.626920  269329 ssh_runner.go:195] Run: crio --version
	I1119 22:34:14.653173  269329 ssh_runner.go:195] Run: crio --version
	I1119 22:34:14.682115  269329 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:34:14.683204  269329 cli_runner.go:164] Run: docker network inspect embed-certs-443380 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:34:14.701170  269329 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1119 22:34:14.705001  269329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
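	(Editorial note: the /bin/bash one-liner above is minikube's idempotent host-record update: grep -v strips any previous host.minikube.internal entry, the fresh record is appended, and the temp file is copied back with sudo, since a plain shell redirect would not run as root. Afterwards /etc/hosts on the node contains the line:

	    192.168.85.1	host.minikube.internal
	)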
	I1119 22:34:14.715086  269329 kubeadm.go:884] updating cluster {Name:embed-certs-443380 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:34:14.715196  269329 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:34:14.715240  269329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:34:14.747354  269329 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:34:14.747372  269329 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:34:14.747408  269329 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:34:14.773749  269329 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:34:14.773768  269329 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:34:14.773776  269329 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1119 22:34:14.773895  269329 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-443380 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:34:14.773994  269329 ssh_runner.go:195] Run: crio config
	I1119 22:34:14.818027  269329 cni.go:84] Creating CNI manager for ""
	I1119 22:34:14.818060  269329 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:34:14.818078  269329 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:34:14.818104  269329 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-443380 NodeName:embed-certs-443380 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:34:14.818271  269329 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-443380"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:34:14.818351  269329 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:34:14.826631  269329 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:34:14.826690  269329 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:34:14.834016  269329 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 22:34:14.845916  269329 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:34:14.857296  269329 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
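	(Editorial note: at this point the rendered kubeadm config shown above has been written to /var/tmp/minikube/kubeadm.yaml.new. As a hedged aside, one way to sanity-check such a multi-document config by hand, assuming a kubeadm binary sits next to the kubelet/kubectl binaries found above, would be:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

	kubeadm config validate checks each document against its declared API version; it is not something this test run executes.)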
	I1119 22:34:14.869550  269329 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:34:14.872859  269329 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:34:14.882483  269329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:14.960010  269329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:34:14.985013  269329 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380 for IP: 192.168.85.2
	I1119 22:34:14.985033  269329 certs.go:195] generating shared ca certs ...
	I1119 22:34:14.985053  269329 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:14.985197  269329 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 22:34:14.985254  269329 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 22:34:14.985268  269329 certs.go:257] generating profile certs ...
	I1119 22:34:14.985383  269329 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/client.key
	I1119 22:34:14.985443  269329 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key.8b1e4b78
	I1119 22:34:14.985497  269329 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.key
	I1119 22:34:14.985639  269329 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem (1338 bytes)
	W1119 22:34:14.985677  269329 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829_empty.pem, impossibly tiny 0 bytes
	I1119 22:34:14.985691  269329 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:34:14.985723  269329 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:34:14.985749  269329 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:34:14.985786  269329 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 22:34:14.985853  269329 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:34:14.986423  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:34:15.005203  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:34:15.023712  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:34:15.043005  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:34:15.063314  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1119 22:34:15.083503  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:34:15.099602  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:34:15.115911  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/embed-certs-443380/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:34:15.132583  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:34:15.148682  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem --> /usr/share/ca-certificates/12829.pem (1338 bytes)
	I1119 22:34:15.165117  269329 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /usr/share/ca-certificates/128292.pem (1708 bytes)
	I1119 22:34:15.183162  269329 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:34:15.194950  269329 ssh_runner.go:195] Run: openssl version
	I1119 22:34:15.200885  269329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:34:15.210240  269329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:34:15.214093  269329 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:34:15.214140  269329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:34:15.250800  269329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:34:15.259182  269329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12829.pem && ln -fs /usr/share/ca-certificates/12829.pem /etc/ssl/certs/12829.pem"
	I1119 22:34:15.267198  269329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12829.pem
	I1119 22:34:15.270686  269329 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12829.pem
	I1119 22:34:15.270738  269329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12829.pem
	I1119 22:34:15.306099  269329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12829.pem /etc/ssl/certs/51391683.0"
	I1119 22:34:15.313985  269329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128292.pem && ln -fs /usr/share/ca-certificates/128292.pem /etc/ssl/certs/128292.pem"
	I1119 22:34:15.321743  269329 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128292.pem
	I1119 22:34:15.325437  269329 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128292.pem
	I1119 22:34:15.325484  269329 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128292.pem
	I1119 22:34:15.360370  269329 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128292.pem /etc/ssl/certs/3ec20f2e.0"
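	(Editorial note: the three openssl x509 -hash calls above print the OpenSSL subject-name hash for each CA file, and the ln -fs commands create the <hash>.0 symlinks that OpenSSL's CApath-style lookup in /etc/ssl/certs expects. For example, minikubeCA.pem hashes to b5213941 in this run, so /etc/ssl/certs/b5213941.0 points at it. Reproduced by hand, illustratively:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	)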
	I1119 22:34:15.367919  269329 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:34:15.371285  269329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:34:15.404648  269329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:34:15.438267  269329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:34:15.470944  269329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:34:15.515917  269329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:34:15.564527  269329 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
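	(Editorial note: the six openssl x509 -checkend 86400 runs verify that none of the existing control-plane certificates expires within the next 24 hours (86400 seconds); openssl exits 0 when the certificate is still valid at now+86400 and non-zero otherwise. Here the flow continues straight to StartCluster below, consistent with all checks passing, so the certificates are reused rather than regenerated. For example:

	    # exit status 0 means the cert is still valid 24 hours from now
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	)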
	I1119 22:34:15.621902  269329 kubeadm.go:401] StartCluster: {Name:embed-certs-443380 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-443380 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:34:15.622008  269329 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:34:15.622067  269329 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:34:15.658155  269329 cri.go:89] found id: "847f5d7dba3ab17916fecc3496f64e3c432a7aea38029dc58d6ca5c607f49bf4"
	I1119 22:34:15.658178  269329 cri.go:89] found id: "f2e2adcbdf2ed28a414676c53047f68a57fcf6fb525c42cea338059bedb6224c"
	I1119 22:34:15.658183  269329 cri.go:89] found id: "185e753f982bb76405831c8b358ebdfd082e42f64259200ff2771e2287ccd2a7"
	I1119 22:34:15.658187  269329 cri.go:89] found id: "de4131eab48f0dd8d34f317e598532f0311ff6539bf32deb7148043cda0db569"
	I1119 22:34:15.658195  269329 cri.go:89] found id: ""
	I1119 22:34:15.658240  269329 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 22:34:15.670596  269329 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:34:15Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:34:15.670674  269329 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:34:15.679497  269329 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:34:15.679514  269329 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:34:15.679555  269329 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:34:15.687061  269329 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:34:15.687852  269329 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-443380" does not appear in /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:15.688281  269329 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-9335/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-443380" cluster setting kubeconfig missing "embed-certs-443380" context setting]
	I1119 22:34:15.689006  269329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:15.690732  269329 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:34:15.699021  269329 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1119 22:34:15.699045  269329 kubeadm.go:602] duration metric: took 19.525442ms to restartPrimaryControlPlane
	I1119 22:34:15.699053  269329 kubeadm.go:403] duration metric: took 77.164381ms to StartCluster
	I1119 22:34:15.699068  269329 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:15.699121  269329 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:15.700853  269329 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:15.701066  269329 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:34:15.701125  269329 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:34:15.701224  269329 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-443380"
	I1119 22:34:15.701246  269329 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-443380"
	W1119 22:34:15.701256  269329 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:34:15.701280  269329 config.go:182] Loaded profile config "embed-certs-443380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:34:15.701295  269329 addons.go:70] Setting default-storageclass=true in profile "embed-certs-443380"
	I1119 22:34:15.701309  269329 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-443380"
	I1119 22:34:15.701286  269329 host.go:66] Checking if "embed-certs-443380" exists ...
	I1119 22:34:15.701656  269329 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:34:15.701290  269329 addons.go:70] Setting dashboard=true in profile "embed-certs-443380"
	I1119 22:34:15.701840  269329 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:34:15.701877  269329 addons.go:239] Setting addon dashboard=true in "embed-certs-443380"
	W1119 22:34:15.701892  269329 addons.go:248] addon dashboard should already be in state true
	I1119 22:34:15.701925  269329 host.go:66] Checking if "embed-certs-443380" exists ...
	I1119 22:34:15.702413  269329 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:34:15.703305  269329 out.go:179] * Verifying Kubernetes components...
	I1119 22:34:15.704939  269329 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:15.731645  269329 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:34:15.731735  269329 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 22:34:15.733264  269329 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:34:15.733280  269329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:34:15.733329  269329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:34:15.733375  269329 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 22:34:11.874081  265374 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:34:11.878228  265374 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:34:11.878249  265374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:34:11.890748  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:34:12.102401  265374 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:34:12.102486  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:34:12.102520  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-949690 minikube.k8s.io/updated_at=2025_11_19T22_34_12_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=newest-cni-949690 minikube.k8s.io/primary=true
	I1119 22:34:12.112155  265374 ops.go:34] apiserver oom_adj: -16
	I1119 22:34:12.195274  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:34:12.695565  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:34:13.196024  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:34:13.695358  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:34:14.195521  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:34:14.695690  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:34:15.196243  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:34:15.696011  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:34:16.195862  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:34:14.021901  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1119 22:34:14.021964  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:34:14.022019  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:34:14.049327  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:14.049349  229026 cri.go:89] found id: "4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:34:14.049355  229026 cri.go:89] found id: ""
	I1119 22:34:14.049364  229026 logs.go:282] 2 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f]
	I1119 22:34:14.049413  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:14.053406  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:14.056925  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:34:14.056990  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:34:14.085505  229026 cri.go:89] found id: ""
	I1119 22:34:14.085529  229026 logs.go:282] 0 containers: []
	W1119 22:34:14.085538  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:34:14.085547  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:34:14.085600  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:34:14.118340  229026 cri.go:89] found id: ""
	I1119 22:34:14.118366  229026 logs.go:282] 0 containers: []
	W1119 22:34:14.118377  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:34:14.118385  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:34:14.118429  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:34:14.144537  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:14.144562  229026 cri.go:89] found id: ""
	I1119 22:34:14.144573  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:34:14.144632  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:14.148346  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:34:14.148410  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:34:14.174381  229026 cri.go:89] found id: ""
	I1119 22:34:14.174404  229026 logs.go:282] 0 containers: []
	W1119 22:34:14.174414  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:34:14.174421  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:34:14.174479  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:34:14.207485  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:14.207508  229026 cri.go:89] found id: "22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12"
	I1119 22:34:14.207519  229026 cri.go:89] found id: ""
	I1119 22:34:14.207530  229026 logs.go:282] 2 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c 22bd49dbbad112cd563ac66b4d8860827ecfb21122c2186ee5e3bbd09616ef12]
	I1119 22:34:14.207584  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:14.212072  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:14.215638  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:34:14.215701  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:34:14.241950  229026 cri.go:89] found id: ""
	I1119 22:34:14.241972  229026 logs.go:282] 0 containers: []
	W1119 22:34:14.241981  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:34:14.241988  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:34:14.242040  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:34:14.272382  229026 cri.go:89] found id: ""
	I1119 22:34:14.272405  229026 logs.go:282] 0 containers: []
	W1119 22:34:14.272416  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:34:14.272433  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:34:14.272449  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:14.305938  229026 logs.go:123] Gathering logs for kube-apiserver [4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f] ...
	I1119 22:34:14.305965  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4ce3b243dbf50e7debc5833c010cd8651b202feb5173acf363d027ac5d8b928f"
	I1119 22:34:14.337570  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:34:14.337595  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:14.386602  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:34:14.386625  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:14.417578  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:34:14.417606  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:34:14.462943  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:34:14.462968  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:34:14.478803  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:34:14.478836  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1119 22:34:16.696429  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:34:17.197955  265374 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:34:17.329719  265374 kubeadm.go:1114] duration metric: took 5.227291365s to wait for elevateKubeSystemPrivileges
	I1119 22:34:17.329756  265374 kubeadm.go:403] duration metric: took 15.598100209s to StartCluster
	I1119 22:34:17.329777  265374 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:17.329881  265374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:17.331993  265374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:17.332230  265374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:34:17.332249  265374 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:34:17.332335  265374 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:34:17.332558  265374 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-949690"
	I1119 22:34:17.332597  265374 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-949690"
	I1119 22:34:17.332434  265374 config.go:182] Loaded profile config "newest-cni-949690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:34:17.332630  265374 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:17.332695  265374 addons.go:70] Setting default-storageclass=true in profile "newest-cni-949690"
	I1119 22:34:17.332718  265374 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-949690"
	I1119 22:34:17.333063  265374 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:17.333232  265374 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:17.334259  265374 out.go:179] * Verifying Kubernetes components...
	I1119 22:34:17.335924  265374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:17.362929  265374 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:34:15.733681  269329 addons.go:239] Setting addon default-storageclass=true in "embed-certs-443380"
	W1119 22:34:15.733714  269329 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:34:15.733741  269329 host.go:66] Checking if "embed-certs-443380" exists ...
	I1119 22:34:15.734411  269329 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:34:15.736932  269329 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 22:34:15.736950  269329 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 22:34:15.737001  269329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:34:15.766563  269329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:34:15.770998  269329 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:34:15.771019  269329 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:34:15.771074  269329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:34:15.782479  269329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:34:15.794972  269329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:34:15.886519  269329 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:34:15.892341  269329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:34:15.906238  269329 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 22:34:15.906264  269329 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 22:34:15.907208  269329 node_ready.go:35] waiting up to 6m0s for node "embed-certs-443380" to be "Ready" ...
	I1119 22:34:15.913176  269329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:34:15.922365  269329 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 22:34:15.922385  269329 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 22:34:15.942481  269329 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 22:34:15.942509  269329 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 22:34:15.956218  269329 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 22:34:15.956241  269329 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 22:34:15.972743  269329 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 22:34:15.972767  269329 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 22:34:15.987735  269329 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 22:34:15.987758  269329 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 22:34:16.001332  269329 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 22:34:16.001415  269329 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 22:34:16.014360  269329 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 22:34:16.014378  269329 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 22:34:16.027785  269329 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:34:16.027806  269329 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 22:34:16.040251  269329 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:34:17.221942  269329 node_ready.go:49] node "embed-certs-443380" is "Ready"
	I1119 22:34:17.221980  269329 node_ready.go:38] duration metric: took 1.314747375s for node "embed-certs-443380" to be "Ready" ...
	I1119 22:34:17.221997  269329 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:34:17.222049  269329 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:34:17.364169  265374 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:34:17.364186  265374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:34:17.364263  265374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:17.366157  265374 addons.go:239] Setting addon default-storageclass=true in "newest-cni-949690"
	I1119 22:34:17.366204  265374 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:17.366722  265374 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:17.393701  265374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:17.399008  265374 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:34:17.399038  265374 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:34:17.399089  265374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:17.424634  265374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:17.448544  265374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:34:17.515703  265374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:34:17.524050  265374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:34:17.538610  265374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:34:17.677834  265374 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
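	(Editorial note: the kubectl | sed | kubectl replace pipeline at 22:34:17.448544 rewrites the coredns ConfigMap in place, inserting a hosts block immediately before the forward directive and a log directive before errors. Illustratively, with the untouched directives elided, the Corefile afterwards should read roughly:

	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.103.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }
	)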
	I1119 22:34:17.679740  265374 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:34:17.679809  265374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:34:17.883022  265374 api_server.go:72] duration metric: took 550.739786ms to wait for apiserver process to appear ...
	I1119 22:34:17.883049  265374 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:34:17.883071  265374 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:17.888640  265374 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 22:34:17.889036  265374 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1119 22:34:17.889683  269329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.99731476s)
	I1119 22:34:17.889744  269329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.976517445s)
	I1119 22:34:17.889864  269329 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.849574708s)
	I1119 22:34:17.889897  269329 api_server.go:72] duration metric: took 2.188803406s to wait for apiserver process to appear ...
	I1119 22:34:17.889927  269329 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:34:17.889946  269329 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1119 22:34:17.891970  269329 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-443380 addons enable metrics-server
	
	I1119 22:34:17.896401  269329 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:34:17.896432  269329 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:34:17.902052  269329 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 22:34:17.889545  265374 api_server.go:141] control plane version: v1.34.1
	I1119 22:34:17.889584  265374 api_server.go:131] duration metric: took 6.527677ms to wait for apiserver health ...
	I1119 22:34:17.889596  265374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:34:17.890736  265374 addons.go:515] duration metric: took 558.395222ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1119 22:34:17.892449  265374 system_pods.go:59] 8 kube-system pods found
	I1119 22:34:17.892480  265374 system_pods.go:61] "coredns-66bc5c9577-wjbzn" [be4fac81-534c-4a17-b208-8ad44d7e9504] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:34:17.892501  265374 system_pods.go:61] "etcd-newest-cni-949690" [77f0100c-0902-434d-9782-9ff8d579d2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:34:17.892518  265374 system_pods.go:61] "kindnet-fw45d" [b409ae83-4d6c-42a0-a436-2159f75e1458] Running
	I1119 22:34:17.892525  265374 system_pods.go:61] "kube-apiserver-newest-cni-949690" [8dce48d6-c1e0-4cae-a68a-c5dbf4a62adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:34:17.892534  265374 system_pods.go:61] "kube-controller-manager-newest-cni-949690" [f61aadf5-fe6a-4566-a44e-f98c9b09b812] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:34:17.892543  265374 system_pods.go:61] "kube-proxy-f98bb" [391d2f06-e215-4d11-a63e-36749e0fdf39] Running
	I1119 22:34:17.892553  265374 system_pods.go:61] "kube-scheduler-newest-cni-949690" [04596963-6c61-45c1-bbcb-59e57760f2b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:34:17.892562  265374 system_pods.go:61] "storage-provisioner" [11651cac-2eb3-47f8-be2c-b30375bc4461] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:34:17.892576  265374 system_pods.go:74] duration metric: took 2.973354ms to wait for pod list to return data ...
	I1119 22:34:17.892588  265374 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:34:17.894657  265374 default_sa.go:45] found service account: "default"
	I1119 22:34:17.894676  265374 default_sa.go:55] duration metric: took 2.081186ms for default service account to be created ...
	I1119 22:34:17.894688  265374 kubeadm.go:587] duration metric: took 562.409787ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 22:34:17.894704  265374 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:34:17.896874  265374 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:34:17.896898  265374 node_conditions.go:123] node cpu capacity is 8
	I1119 22:34:17.896912  265374 node_conditions.go:105] duration metric: took 2.202788ms to run NodePressure ...
	I1119 22:34:17.896923  265374 start.go:242] waiting for startup goroutines ...
	I1119 22:34:18.182547  265374 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-949690" context rescaled to 1 replicas
	I1119 22:34:18.182592  265374 start.go:247] waiting for cluster config update ...
	I1119 22:34:18.182606  265374 start.go:256] writing updated cluster config ...
	I1119 22:34:18.182951  265374 ssh_runner.go:195] Run: rm -f paused
	I1119 22:34:18.239406  265374 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:34:18.241456  265374 out.go:179] * Done! kubectl is now configured to use "newest-cni-949690" cluster and "default" namespace by default
	W1119 22:34:15.006701  257842 node_ready.go:57] node "default-k8s-diff-port-409987" has "Ready":"False" status (will retry)
	W1119 22:34:17.008342  257842 node_ready.go:57] node "default-k8s-diff-port-409987" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.007087158Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.009709417Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=635220d9-b62b-4800-b989-e7703737e24d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.012638281Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.013149248Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e8277fe8-0f3f-4056-bb53-5bf91928f564 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.013572121Z" level=info msg="Ran pod sandbox a5e559611d8a98d14a608b77916926a758d45c696239aa1c8eee5e1b52ece70f with infra container: kube-system/kube-proxy-f98bb/POD" id=635220d9-b62b-4800-b989-e7703737e24d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.014747617Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.01476985Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d6ed99e0-c6ad-47b5-a8cb-3bc744859df0 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.015588825Z" level=info msg="Ran pod sandbox 5ed534ee4fd0387d2f534beb431e4247dcc5deae8fe316c6e3858fa3e58060b8 with infra container: kube-system/kindnet-fw45d/POD" id=e8277fe8-0f3f-4056-bb53-5bf91928f564 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.015933035Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c0f6d421-1587-4a58-be9d-121f9e27d86a name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.016616931Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=99914cc6-0aa0-4189-be07-f7140f758fb0 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.017459211Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3c5f097c-1449-4877-8770-2fb7d4693f34 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.020417728Z" level=info msg="Creating container: kube-system/kube-proxy-f98bb/kube-proxy" id=f298190d-514b-4f1d-841d-b2bf02912b24 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.020534958Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.024684477Z" level=info msg="Creating container: kube-system/kindnet-fw45d/kindnet-cni" id=1f953b3c-c51c-47fb-9240-5e3679982436 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.02478263Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.026179968Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.026790238Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.032678422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.033246901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.059324454Z" level=info msg="Created container 5d82219638c3a7e98e53637efc516715b2f6884e9979b82f383acdd27fc74105: kube-system/kindnet-fw45d/kindnet-cni" id=1f953b3c-c51c-47fb-9240-5e3679982436 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.059984013Z" level=info msg="Starting container: 5d82219638c3a7e98e53637efc516715b2f6884e9979b82f383acdd27fc74105" id=f848d260-1798-464b-82c1-16b44334bc3b name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.061886759Z" level=info msg="Started container" PID=1519 containerID=5d82219638c3a7e98e53637efc516715b2f6884e9979b82f383acdd27fc74105 description=kube-system/kindnet-fw45d/kindnet-cni id=f848d260-1798-464b-82c1-16b44334bc3b name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ed534ee4fd0387d2f534beb431e4247dcc5deae8fe316c6e3858fa3e58060b8
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.063035165Z" level=info msg="Created container f1ac5d780bfead88e1e4a5fb9304aea707f3d0dac846c8696fd16527de00e51a: kube-system/kube-proxy-f98bb/kube-proxy" id=f298190d-514b-4f1d-841d-b2bf02912b24 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.063479745Z" level=info msg="Starting container: f1ac5d780bfead88e1e4a5fb9304aea707f3d0dac846c8696fd16527de00e51a" id=db4b2ae4-8157-4250-b5c6-89f5ec18a370 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:34:17 newest-cni-949690 crio[777]: time="2025-11-19T22:34:17.066042832Z" level=info msg="Started container" PID=1518 containerID=f1ac5d780bfead88e1e4a5fb9304aea707f3d0dac846c8696fd16527de00e51a description=kube-system/kube-proxy-f98bb/kube-proxy id=db4b2ae4-8157-4250-b5c6-89f5ec18a370 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a5e559611d8a98d14a608b77916926a758d45c696239aa1c8eee5e1b52ece70f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5d82219638c3a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   5ed534ee4fd03       kindnet-fw45d                               kube-system
	f1ac5d780bfea       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   a5e559611d8a9       kube-proxy-f98bb                            kube-system
	d5ac7d8f2d400       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   b2e02e5f058d7       kube-apiserver-newest-cni-949690            kube-system
	f279804944200       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   450abf92b0a19       kube-scheduler-newest-cni-949690            kube-system
	dc6148cddef33       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   09038a5cb9ac1       etcd-newest-cni-949690                      kube-system
	be681b4a0f127       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   2a6c88e6b4331       kube-controller-manager-newest-cni-949690   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-949690
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-949690
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=newest-cni-949690
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_34_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:34:09 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-949690
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:34:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:34:11 +0000   Wed, 19 Nov 2025 22:34:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:34:11 +0000   Wed, 19 Nov 2025 22:34:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:34:11 +0000   Wed, 19 Nov 2025 22:34:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Nov 2025 22:34:11 +0000   Wed, 19 Nov 2025 22:34:07 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-949690
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                76884ddf-0fb7-4736-8296-1d7cf95f4d03
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-949690                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-fw45d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-949690             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-949690    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-f98bb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-949690             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2s                 kube-proxy       
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-949690 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-949690 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-949690 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-949690 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-949690 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-949690 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-949690 event: Registered Node newest-cni-949690 in Controller
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [dc6148cddef33b9b4b685df1a37b7e46231756032aca871b48d628b3a233fa14] <==
	{"level":"warn","ts":"2025-11-19T22:34:08.288061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.296665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.302925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.309732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.316285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.322580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.330033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.337171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.344354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.357956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.364191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.370697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.377415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.384651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.392320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.398629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.405646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.413438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.420430Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.427199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.433865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.446317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.452696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.459906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:08.510226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49274","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:34:19 up  1:16,  0 user,  load average: 3.54, 2.90, 1.92
	Linux newest-cni-949690 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5d82219638c3a7e98e53637efc516715b2f6884e9979b82f383acdd27fc74105] <==
	I1119 22:34:17.284603       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:34:17.284947       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 22:34:17.285096       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:34:17.285118       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:34:17.285137       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:34:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:34:17.491470       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:34:17.491900       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:34:17.491920       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:34:17.574582       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [d5ac7d8f2d400fb412a4d05268a342fe66c43b8892e5c3480442042858f3f06e] <==
	I1119 22:34:09.019006       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:34:09.019016       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:34:09.019024       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:34:09.021608       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:34:09.021682       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:34:09.033582       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:34:09.034354       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:34:09.198109       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:34:09.913836       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:34:09.917475       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:34:09.917495       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:34:10.324141       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:34:10.356533       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:34:10.415108       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:34:10.420050       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1119 22:34:10.420885       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:34:10.424306       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:34:10.928312       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:34:11.260544       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:34:11.268892       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:34:11.276086       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:34:16.680656       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1119 22:34:16.880322       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:34:16.979919       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:34:16.984035       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [be681b4a0f12728ee0b8dcf4b7d86c577fa2128c4b700a3a7c834cd47219c7d1] <==
	I1119 22:34:15.926952       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:34:15.927148       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 22:34:15.927903       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:34:15.928059       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 22:34:15.927985       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 22:34:15.928010       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1119 22:34:15.928026       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:34:15.928029       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:34:15.928037       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1119 22:34:15.928373       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:34:15.928009       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:34:15.929019       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 22:34:15.930253       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:34:15.930325       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:34:15.930335       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 22:34:15.931293       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 22:34:15.931389       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1119 22:34:15.931453       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1119 22:34:15.931476       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1119 22:34:15.931482       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:34:15.931488       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:34:15.931545       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:34:15.934213       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1119 22:34:15.940621       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-949690" podCIDRs=["10.42.0.0/24"]
	I1119 22:34:15.955701       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f1ac5d780bfead88e1e4a5fb9304aea707f3d0dac846c8696fd16527de00e51a] <==
	I1119 22:34:17.129269       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:34:17.213101       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:34:17.314941       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:34:17.315117       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 22:34:17.315315       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:34:17.349094       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:34:17.349620       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:34:17.360385       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:34:17.361380       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:34:17.361503       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:34:17.362847       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:34:17.363053       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:34:17.363168       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:34:17.363243       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:34:17.363360       1 config.go:309] "Starting node config controller"
	I1119 22:34:17.363390       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:34:17.363416       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:34:17.363135       1 config.go:200] "Starting service config controller"
	I1119 22:34:17.363578       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:34:17.464396       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:34:17.464467       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:34:17.464509       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f279804944200c99c22cf3a216dc1d5010caf739ad5a77e376fdd70a2de459d0] <==
	E1119 22:34:08.956893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:34:08.956921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:34:08.956945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:34:08.956973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:34:08.956855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:34:08.957201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:34:08.957306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 22:34:08.957472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:34:08.957665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:34:08.957707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:34:08.957878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:34:08.957938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:34:08.957929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:34:08.958135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:34:08.958146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:34:08.958159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:34:08.958161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:34:09.770570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:34:09.827707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:34:09.876017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:34:09.912091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:34:09.954555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1119 22:34:09.986509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:34:09.995429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1119 22:34:12.653189       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:34:11 newest-cni-949690 kubelet[1321]: I1119 22:34:11.386296    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5e8e1a0473a770200839cfd8663e47c1-usr-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-949690\" (UID: \"5e8e1a0473a770200839cfd8663e47c1\") " pod="kube-system/kube-controller-manager-newest-cni-949690"
	Nov 19 22:34:12 newest-cni-949690 kubelet[1321]: I1119 22:34:12.076889    1321 apiserver.go:52] "Watching apiserver"
	Nov 19 22:34:12 newest-cni-949690 kubelet[1321]: I1119 22:34:12.085383    1321 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 22:34:12 newest-cni-949690 kubelet[1321]: I1119 22:34:12.114011    1321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-949690"
	Nov 19 22:34:12 newest-cni-949690 kubelet[1321]: I1119 22:34:12.114105    1321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-949690"
	Nov 19 22:34:12 newest-cni-949690 kubelet[1321]: I1119 22:34:12.114375    1321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-949690"
	Nov 19 22:34:12 newest-cni-949690 kubelet[1321]: E1119 22:34:12.124717    1321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-949690\" already exists" pod="kube-system/etcd-newest-cni-949690"
	Nov 19 22:34:12 newest-cni-949690 kubelet[1321]: E1119 22:34:12.124771    1321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-949690\" already exists" pod="kube-system/kube-apiserver-newest-cni-949690"
	Nov 19 22:34:12 newest-cni-949690 kubelet[1321]: E1119 22:34:12.124724    1321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-949690\" already exists" pod="kube-system/kube-scheduler-newest-cni-949690"
	Nov 19 22:34:12 newest-cni-949690 kubelet[1321]: I1119 22:34:12.161696    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-949690" podStartSLOduration=1.161675887 podStartE2EDuration="1.161675887s" podCreationTimestamp="2025-11-19 22:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:34:12.161648447 +0000 UTC m=+1.140226131" watchObservedRunningTime="2025-11-19 22:34:12.161675887 +0000 UTC m=+1.140253551"
	Nov 19 22:34:12 newest-cni-949690 kubelet[1321]: I1119 22:34:12.181862    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-949690" podStartSLOduration=1.18184449 podStartE2EDuration="1.18184449s" podCreationTimestamp="2025-11-19 22:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:34:12.170882067 +0000 UTC m=+1.149459734" watchObservedRunningTime="2025-11-19 22:34:12.18184449 +0000 UTC m=+1.160422157"
	Nov 19 22:34:12 newest-cni-949690 kubelet[1321]: I1119 22:34:12.190408    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-949690" podStartSLOduration=1.190374892 podStartE2EDuration="1.190374892s" podCreationTimestamp="2025-11-19 22:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:34:12.181842269 +0000 UTC m=+1.160419926" watchObservedRunningTime="2025-11-19 22:34:12.190374892 +0000 UTC m=+1.168952563"
	Nov 19 22:34:12 newest-cni-949690 kubelet[1321]: I1119 22:34:12.201283    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-949690" podStartSLOduration=1.201263436 podStartE2EDuration="1.201263436s" podCreationTimestamp="2025-11-19 22:34:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:34:12.190670866 +0000 UTC m=+1.169248515" watchObservedRunningTime="2025-11-19 22:34:12.201263436 +0000 UTC m=+1.179841107"
	Nov 19 22:34:16 newest-cni-949690 kubelet[1321]: I1119 22:34:16.022433    1321 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 19 22:34:16 newest-cni-949690 kubelet[1321]: I1119 22:34:16.023230    1321 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 19 22:34:16 newest-cni-949690 kubelet[1321]: I1119 22:34:16.722672    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/391d2f06-e215-4d11-a63e-36749e0fdf39-kube-proxy\") pod \"kube-proxy-f98bb\" (UID: \"391d2f06-e215-4d11-a63e-36749e0fdf39\") " pod="kube-system/kube-proxy-f98bb"
	Nov 19 22:34:16 newest-cni-949690 kubelet[1321]: I1119 22:34:16.722709    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/391d2f06-e215-4d11-a63e-36749e0fdf39-xtables-lock\") pod \"kube-proxy-f98bb\" (UID: \"391d2f06-e215-4d11-a63e-36749e0fdf39\") " pod="kube-system/kube-proxy-f98bb"
	Nov 19 22:34:16 newest-cni-949690 kubelet[1321]: I1119 22:34:16.722725    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/391d2f06-e215-4d11-a63e-36749e0fdf39-lib-modules\") pod \"kube-proxy-f98bb\" (UID: \"391d2f06-e215-4d11-a63e-36749e0fdf39\") " pod="kube-system/kube-proxy-f98bb"
	Nov 19 22:34:16 newest-cni-949690 kubelet[1321]: I1119 22:34:16.722746    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbvvw\" (UniqueName: \"kubernetes.io/projected/391d2f06-e215-4d11-a63e-36749e0fdf39-kube-api-access-qbvvw\") pod \"kube-proxy-f98bb\" (UID: \"391d2f06-e215-4d11-a63e-36749e0fdf39\") " pod="kube-system/kube-proxy-f98bb"
	Nov 19 22:34:16 newest-cni-949690 kubelet[1321]: I1119 22:34:16.722801    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b409ae83-4d6c-42a0-a436-2159f75e1458-cni-cfg\") pod \"kindnet-fw45d\" (UID: \"b409ae83-4d6c-42a0-a436-2159f75e1458\") " pod="kube-system/kindnet-fw45d"
	Nov 19 22:34:16 newest-cni-949690 kubelet[1321]: I1119 22:34:16.722861    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b409ae83-4d6c-42a0-a436-2159f75e1458-lib-modules\") pod \"kindnet-fw45d\" (UID: \"b409ae83-4d6c-42a0-a436-2159f75e1458\") " pod="kube-system/kindnet-fw45d"
	Nov 19 22:34:16 newest-cni-949690 kubelet[1321]: I1119 22:34:16.722882    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9g49\" (UniqueName: \"kubernetes.io/projected/b409ae83-4d6c-42a0-a436-2159f75e1458-kube-api-access-f9g49\") pod \"kindnet-fw45d\" (UID: \"b409ae83-4d6c-42a0-a436-2159f75e1458\") " pod="kube-system/kindnet-fw45d"
	Nov 19 22:34:16 newest-cni-949690 kubelet[1321]: I1119 22:34:16.722902    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b409ae83-4d6c-42a0-a436-2159f75e1458-xtables-lock\") pod \"kindnet-fw45d\" (UID: \"b409ae83-4d6c-42a0-a436-2159f75e1458\") " pod="kube-system/kindnet-fw45d"
	Nov 19 22:34:17 newest-cni-949690 kubelet[1321]: I1119 22:34:17.141743    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-fw45d" podStartSLOduration=1.141717264 podStartE2EDuration="1.141717264s" podCreationTimestamp="2025-11-19 22:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:34:17.141261423 +0000 UTC m=+6.119839114" watchObservedRunningTime="2025-11-19 22:34:17.141717264 +0000 UTC m=+6.120294931"
	Nov 19 22:34:17 newest-cni-949690 kubelet[1321]: I1119 22:34:17.154254    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f98bb" podStartSLOduration=1.154231012 podStartE2EDuration="1.154231012s" podCreationTimestamp="2025-11-19 22:34:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:34:17.153888122 +0000 UTC m=+6.132465789" watchObservedRunningTime="2025-11-19 22:34:17.154231012 +0000 UTC m=+6.132808679"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-949690 -n newest-cni-949690
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-949690 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-wjbzn storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-949690 describe pod coredns-66bc5c9577-wjbzn storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-949690 describe pod coredns-66bc5c9577-wjbzn storage-provisioner: exit status 1 (58.180685ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-wjbzn" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-949690 describe pod coredns-66bc5c9577-wjbzn storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.95s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-409987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-409987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (259.029072ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:34:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-409987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
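The MK_ADDON_ENABLE_PAUSED failure above comes from the paused-state check, which (per the stderr) shells out to runc inside the node and fails because /run/runc is absent there. A minimal reproduction sketch, assuming the profile from this run is still up; the probed command is the one quoted in the stderr:

	# run the same paused-state probe inside the node by hand
	out/minikube-linux-amd64 -p default-k8s-diff-port-409987 ssh -- sudo runc list -f json
	# on this node it exits with status 1: open /run/runc: no such file or directory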
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-409987 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-409987 describe deploy/metrics-server -n kube-system: exit status 1 (68.571642ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-409987 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
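The image assertion above expects the metrics-server deployment to carry the overridden registry. A minimal sketch of the same check done by hand once the deployment actually exists (here it is NotFound because the enable step already failed), assuming the context name from this run:

	kubectl --context default-k8s-diff-port-409987 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4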
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-409987
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-409987:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974",
	        "Created": "2025-11-19T22:33:29.234870853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 259126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:33:29.271364154Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974/hostname",
	        "HostsPath": "/var/lib/docker/containers/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974/hosts",
	        "LogPath": "/var/lib/docker/containers/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974-json.log",
	        "Name": "/default-k8s-diff-port-409987",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-409987:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-409987",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974",
	                "LowerDir": "/var/lib/docker/overlay2/cdfcc6cb40f56d4dd09b547a7f30de2e1186774b31b4b49f04fe4c188a295a12-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cdfcc6cb40f56d4dd09b547a7f30de2e1186774b31b4b49f04fe4c188a295a12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cdfcc6cb40f56d4dd09b547a7f30de2e1186774b31b4b49f04fe4c188a295a12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cdfcc6cb40f56d4dd09b547a7f30de2e1186774b31b4b49f04fe4c188a295a12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-409987",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-409987/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-409987",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-409987",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-409987",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1946a79d7a55788219eeebcbff99ad4b44ade3da87c18935555f1e916bbae5c3",
	            "SandboxKey": "/var/run/docker/netns/1946a79d7a55",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-409987": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03e1882d811d99da2a01a21670ff1bc38787a9ad8aa320e4d377f6f9c0dda9f8",
	                    "EndpointID": "d366783c9cca4b923171f627f3616d023b8f79f5a602fcb2f37fb9f4a37287c6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "d6:d9:bb:ff:56:4e",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-409987",
	                        "1cd68db04c75"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
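Only a few fields of the inspect dump above are usually of interest; a minimal sketch of pulling just the forwarded API-server port with a Go-template filter, assuming the container name from this run:

	docker inspect -f '{{ (index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort }}' default-k8s-diff-port-409987
	# per the NetworkSettings block above, this prints 33081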
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-409987 logs -n 25
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-855818 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-855818       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:32 UTC │
	│ delete  │ -p cert-expiration-855818                                                                                                                                                                                                                     │ cert-expiration-855818       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ image   │ old-k8s-version-680619 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p old-k8s-version-680619 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p disable-driver-mounts-726490                                                                                                                                                                                                               │ disable-driver-mounts-726490 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ image   │ no-preload-178067 image list --format=json                                                                                                                                                                                                    │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p no-preload-178067 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-443380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ stop    │ -p embed-certs-443380 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-443380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-949690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ stop    │ -p newest-cni-949690 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable dashboard -p newest-cni-949690 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-409987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ image   │ newest-cni-949690 image list --format=json                                                                                                                                                                                                    │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ pause   │ -p newest-cni-949690 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:34:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:34:33.330295  274229 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:34:33.330411  274229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:34:33.330420  274229 out.go:374] Setting ErrFile to fd 2...
	I1119 22:34:33.330425  274229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:34:33.330632  274229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:34:33.331098  274229 out.go:368] Setting JSON to false
	I1119 22:34:33.332209  274229 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4621,"bootTime":1763587052,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:34:33.332295  274229 start.go:143] virtualization: kvm guest
	I1119 22:34:33.334075  274229 out.go:179] * [newest-cni-949690] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:34:33.335316  274229 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:34:33.335334  274229 notify.go:221] Checking for updates...
	I1119 22:34:33.337262  274229 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:34:33.338454  274229 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:33.339494  274229 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:34:33.340628  274229 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:34:33.341750  274229 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:34:33.343306  274229 config.go:182] Loaded profile config "newest-cni-949690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:34:33.343856  274229 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:34:33.368362  274229 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:34:33.368450  274229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:34:33.423361  274229 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:34:33.414091828 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:34:33.423470  274229 docker.go:319] overlay module found
	I1119 22:34:33.425102  274229 out.go:179] * Using the docker driver based on existing profile
	I1119 22:34:33.426204  274229 start.go:309] selected driver: docker
	I1119 22:34:33.426217  274229 start.go:930] validating driver "docker" against &{Name:newest-cni-949690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:34:33.426303  274229 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:34:33.427062  274229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:34:33.482572  274229 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-19 22:34:33.473412 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:34:33.482955  274229 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 22:34:33.482993  274229 cni.go:84] Creating CNI manager for ""
	I1119 22:34:33.483056  274229 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:34:33.483099  274229 start.go:353] cluster config:
	{Name:newest-cni-949690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:34:33.484922  274229 out.go:179] * Starting "newest-cni-949690" primary control-plane node in "newest-cni-949690" cluster
	I1119 22:34:33.486015  274229 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:34:33.487084  274229 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:34:33.487995  274229 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:34:33.488024  274229 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:34:33.488046  274229 cache.go:65] Caching tarball of preloaded images
	I1119 22:34:33.488079  274229 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:34:33.488138  274229 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:34:33.488153  274229 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:34:33.488259  274229 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/config.json ...
	I1119 22:34:33.507602  274229 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:34:33.507617  274229 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:34:33.507631  274229 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:34:33.507654  274229 start.go:360] acquireMachinesLock for newest-cni-949690: {Name:mk317921465b37fc459423448fcaa153e30f6967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:34:33.507709  274229 start.go:364] duration metric: took 39.568µs to acquireMachinesLock for "newest-cni-949690"
	I1119 22:34:33.507725  274229 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:34:33.507730  274229 fix.go:54] fixHost starting: 
	I1119 22:34:33.507951  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:33.524445  274229 fix.go:112] recreateIfNeeded on newest-cni-949690: state=Stopped err=<nil>
	W1119 22:34:33.524473  274229 fix.go:138] unexpected machine state, will restart: <nil>
	W1119 22:34:29.505783  257842 node_ready.go:57] node "default-k8s-diff-port-409987" has "Ready":"False" status (will retry)
	W1119 22:34:31.506161  257842 node_ready.go:57] node "default-k8s-diff-port-409987" has "Ready":"False" status (will retry)
	I1119 22:34:33.011714  257842 node_ready.go:49] node "default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:33.011742  257842 node_ready.go:38] duration metric: took 41.008374378s for node "default-k8s-diff-port-409987" to be "Ready" ...
	I1119 22:34:33.011757  257842 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:34:33.011802  257842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:34:33.024549  257842 api_server.go:72] duration metric: took 41.352426943s to wait for apiserver process to appear ...
	I1119 22:34:33.024573  257842 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:34:33.024593  257842 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1119 22:34:33.029923  257842 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1119 22:34:33.031006  257842 api_server.go:141] control plane version: v1.34.1
	I1119 22:34:33.031027  257842 api_server.go:131] duration metric: took 6.447983ms to wait for apiserver health ...
	I1119 22:34:33.031036  257842 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:34:33.034211  257842 system_pods.go:59] 8 kube-system pods found
	I1119 22:34:33.034250  257842 system_pods.go:61] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:34:33.034260  257842 system_pods.go:61] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:33.034272  257842 system_pods.go:61] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:33.034277  257842 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:33.034286  257842 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:33.034295  257842 system_pods.go:61] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:33.034300  257842 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:33.034308  257842 system_pods.go:61] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:34:33.034318  257842 system_pods.go:74] duration metric: took 3.273983ms to wait for pod list to return data ...
	I1119 22:34:33.034333  257842 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:34:33.036602  257842 default_sa.go:45] found service account: "default"
	I1119 22:34:33.036620  257842 default_sa.go:55] duration metric: took 2.277845ms for default service account to be created ...
	I1119 22:34:33.036630  257842 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:34:33.039135  257842 system_pods.go:86] 8 kube-system pods found
	I1119 22:34:33.039163  257842 system_pods.go:89] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:34:33.039169  257842 system_pods.go:89] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:33.039175  257842 system_pods.go:89] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:33.039178  257842 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:33.039184  257842 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:33.039191  257842 system_pods.go:89] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:33.039194  257842 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:33.039199  257842 system_pods.go:89] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:34:33.039218  257842 retry.go:31] will retry after 283.539767ms: missing components: kube-dns
	I1119 22:34:33.329109  257842 system_pods.go:86] 8 kube-system pods found
	I1119 22:34:33.329139  257842 system_pods.go:89] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:34:33.329145  257842 system_pods.go:89] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:33.329150  257842 system_pods.go:89] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:33.329154  257842 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:33.329157  257842 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:33.329161  257842 system_pods.go:89] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:33.329164  257842 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:33.329176  257842 system_pods.go:89] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:34:33.329193  257842 retry.go:31] will retry after 250.82065ms: missing components: kube-dns
	I1119 22:34:33.583473  257842 system_pods.go:86] 8 kube-system pods found
	I1119 22:34:33.583501  257842 system_pods.go:89] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:34:33.583507  257842 system_pods.go:89] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:33.583513  257842 system_pods.go:89] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:33.583516  257842 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:33.583520  257842 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:33.583524  257842 system_pods.go:89] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:33.583528  257842 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:33.583531  257842 system_pods.go:89] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Running
	I1119 22:34:33.583545  257842 retry.go:31] will retry after 471.945976ms: missing components: kube-dns
	I1119 22:34:34.059943  257842 system_pods.go:86] 8 kube-system pods found
	I1119 22:34:34.059977  257842 system_pods.go:89] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Running
	I1119 22:34:34.059986  257842 system_pods.go:89] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:34.059993  257842 system_pods.go:89] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:34.059999  257842 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:34.060005  257842 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:34.060011  257842 system_pods.go:89] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:34.060016  257842 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:34.060021  257842 system_pods.go:89] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Running
	I1119 22:34:34.060030  257842 system_pods.go:126] duration metric: took 1.023393605s to wait for k8s-apps to be running ...
	I1119 22:34:34.060042  257842 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:34:34.060088  257842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:34:34.085046  257842 system_svc.go:56] duration metric: took 24.992513ms WaitForService to wait for kubelet
	I1119 22:34:34.085085  257842 kubeadm.go:587] duration metric: took 42.412965914s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:34:34.085108  257842 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:34:34.088575  257842 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:34:34.088604  257842 node_conditions.go:123] node cpu capacity is 8
	I1119 22:34:34.088620  257842 node_conditions.go:105] duration metric: took 3.505513ms to run NodePressure ...
	I1119 22:34:34.088635  257842 start.go:242] waiting for startup goroutines ...
	I1119 22:34:34.088645  257842 start.go:247] waiting for cluster config update ...
	I1119 22:34:34.088659  257842 start.go:256] writing updated cluster config ...
	I1119 22:34:34.088995  257842 ssh_runner.go:195] Run: rm -f paused
	I1119 22:34:34.093920  257842 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:34:34.097808  257842 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jv7mb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.104292  257842 pod_ready.go:94] pod "coredns-66bc5c9577-jv7mb" is "Ready"
	I1119 22:34:34.104315  257842 pod_ready.go:86] duration metric: took 6.453567ms for pod "coredns-66bc5c9577-jv7mb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.106517  257842 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.110627  257842 pod_ready.go:94] pod "etcd-default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:34.110663  257842 pod_ready.go:86] duration metric: took 4.119698ms for pod "etcd-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.112556  257842 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.116315  257842 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:34.116335  257842 pod_ready.go:86] duration metric: took 3.757821ms for pod "kube-apiserver-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.118900  257842 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.497369  257842 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:34.497391  257842 pod_ready.go:86] duration metric: took 378.471441ms for pod "kube-controller-manager-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.698377  257842 pod_ready.go:83] waiting for pod "kube-proxy-ph6ff" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:35.097763  257842 pod_ready.go:94] pod "kube-proxy-ph6ff" is "Ready"
	I1119 22:34:35.097786  257842 pod_ready.go:86] duration metric: took 399.387132ms for pod "kube-proxy-ph6ff" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:35.297421  257842 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:35.697562  257842 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:35.697595  257842 pod_ready.go:86] duration metric: took 400.149921ms for pod "kube-scheduler-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:35.697609  257842 pod_ready.go:40] duration metric: took 1.60365602s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:34:35.740250  257842 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:34:35.742579  257842 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-409987" cluster and "default" namespace by default
	I1119 22:34:34.015894  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:34:34.016410  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:34:34.016473  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:34:34.016533  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:34:34.044039  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:34.044060  229026 cri.go:89] found id: ""
	I1119 22:34:34.044070  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:34:34.044121  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:34.048072  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:34:34.048123  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:34:34.085696  229026 cri.go:89] found id: ""
	I1119 22:34:34.085724  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.085736  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:34:34.085746  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:34:34.085851  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:34:34.120603  229026 cri.go:89] found id: ""
	I1119 22:34:34.120627  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.120636  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:34:34.120645  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:34:34.120708  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:34:34.145396  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:34.145417  229026 cri.go:89] found id: ""
	I1119 22:34:34.145428  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:34:34.145476  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:34.149506  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:34:34.149574  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:34:34.176649  229026 cri.go:89] found id: ""
	I1119 22:34:34.176674  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.176684  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:34:34.176691  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:34:34.176744  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:34:34.203378  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:34.203395  229026 cri.go:89] found id: ""
	I1119 22:34:34.203402  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:34:34.203443  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:34.207412  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:34:34.207488  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:34:34.233093  229026 cri.go:89] found id: ""
	I1119 22:34:34.233114  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.233121  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:34:34.233127  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:34:34.233168  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:34:34.259032  229026 cri.go:89] found id: ""
	I1119 22:34:34.259056  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.259065  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:34:34.259076  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:34:34.259096  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:34.290407  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:34:34.290442  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:34.340448  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:34:34.340475  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:34.366016  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:34:34.366045  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:34:34.409566  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:34:34.409591  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:34:34.437163  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:34:34.437189  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:34:34.530916  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:34:34.530943  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:34:34.544403  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:34:34.544423  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:34:34.596039  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1119 22:34:33.445596  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	W1119 22:34:35.944355  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	I1119 22:34:33.525962  274229 out.go:252] * Restarting existing docker container for "newest-cni-949690" ...
	I1119 22:34:33.526026  274229 cli_runner.go:164] Run: docker start newest-cni-949690
	I1119 22:34:33.807804  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:33.826302  274229 kic.go:430] container "newest-cni-949690" state is running.
	I1119 22:34:33.826759  274229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-949690
	I1119 22:34:33.844694  274229 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/config.json ...
	I1119 22:34:33.844930  274229 machine.go:94] provisionDockerMachine start ...
	I1119 22:34:33.845009  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:33.863360  274229 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:33.863582  274229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1119 22:34:33.863594  274229 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:34:33.864325  274229 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60156->127.0.0.1:33093: read: connection reset by peer
	I1119 22:34:36.995210  274229 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-949690
	
	I1119 22:34:36.995240  274229 ubuntu.go:182] provisioning hostname "newest-cni-949690"
	I1119 22:34:36.995297  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:37.013235  274229 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:37.013489  274229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1119 22:34:37.013510  274229 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-949690 && echo "newest-cni-949690" | sudo tee /etc/hostname
	I1119 22:34:37.147228  274229 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-949690
	
	I1119 22:34:37.147327  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:37.168935  274229 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:37.169231  274229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1119 22:34:37.169259  274229 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-949690' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-949690/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-949690' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:34:37.298184  274229 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:34:37.298215  274229 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 22:34:37.298261  274229 ubuntu.go:190] setting up certificates
	I1119 22:34:37.298284  274229 provision.go:84] configureAuth start
	I1119 22:34:37.298344  274229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-949690
	I1119 22:34:37.319706  274229 provision.go:143] copyHostCerts
	I1119 22:34:37.319771  274229 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem, removing ...
	I1119 22:34:37.319788  274229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem
	I1119 22:34:37.319891  274229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 22:34:37.320027  274229 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem, removing ...
	I1119 22:34:37.320044  274229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem
	I1119 22:34:37.320101  274229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 22:34:37.320226  274229 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem, removing ...
	I1119 22:34:37.320235  274229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem
	I1119 22:34:37.320278  274229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 22:34:37.320347  274229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.newest-cni-949690 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-949690]
	I1119 22:34:37.636299  274229 provision.go:177] copyRemoteCerts
	I1119 22:34:37.636353  274229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:34:37.636390  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:37.656778  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:37.748239  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:34:37.765194  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:34:37.781622  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:34:37.797963  274229 provision.go:87] duration metric: took 499.66535ms to configureAuth
	I1119 22:34:37.797984  274229 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:34:37.798154  274229 config.go:182] Loaded profile config "newest-cni-949690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:34:37.798258  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:37.817180  274229 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:37.817381  274229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1119 22:34:37.817398  274229 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:34:38.091892  274229 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:34:38.091918  274229 machine.go:97] duration metric: took 4.246971119s to provisionDockerMachine
	I1119 22:34:38.091933  274229 start.go:293] postStartSetup for "newest-cni-949690" (driver="docker")
	I1119 22:34:38.091945  274229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:34:38.092012  274229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:34:38.092060  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:38.109860  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:38.200247  274229 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:34:38.203527  274229 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:34:38.203577  274229 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:34:38.203589  274229 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 22:34:38.203630  274229 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 22:34:38.203698  274229 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem -> 128292.pem in /etc/ssl/certs
	I1119 22:34:38.203800  274229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:34:38.211127  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:34:38.227108  274229 start.go:296] duration metric: took 135.165199ms for postStartSetup
	I1119 22:34:38.227183  274229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:34:38.227217  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:38.245993  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:38.335573  274229 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:34:38.339698  274229 fix.go:56] duration metric: took 4.831963481s for fixHost
	I1119 22:34:38.339720  274229 start.go:83] releasing machines lock for "newest-cni-949690", held for 4.831999371s
	I1119 22:34:38.339779  274229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-949690
	I1119 22:34:38.357434  274229 ssh_runner.go:195] Run: cat /version.json
	I1119 22:34:38.357469  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:38.357551  274229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:34:38.357616  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:38.376364  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:38.376897  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:38.536558  274229 ssh_runner.go:195] Run: systemctl --version
	I1119 22:34:38.542682  274229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:34:38.575491  274229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:34:38.579783  274229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:34:38.579849  274229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:34:38.587790  274229 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:34:38.587811  274229 start.go:496] detecting cgroup driver to use...
	I1119 22:34:38.587851  274229 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:34:38.587888  274229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:34:38.601596  274229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:34:38.612897  274229 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:34:38.612941  274229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:34:38.625963  274229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:34:38.637676  274229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:34:38.714790  274229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:34:38.797980  274229 docker.go:234] disabling docker service ...
	I1119 22:34:38.798067  274229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:34:38.811449  274229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:34:38.822900  274229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:34:38.900719  274229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:34:38.974367  274229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:34:38.986294  274229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:34:38.999467  274229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:34:38.999519  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.008010  274229 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 22:34:39.008056  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.016160  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.024213  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.032158  274229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:34:39.039615  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.047544  274229 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.055117  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.063061  274229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:34:39.069737  274229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:34:39.076429  274229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:39.154204  274229 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:34:39.294029  274229 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:34:39.294103  274229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:34:39.297873  274229 start.go:564] Will wait 60s for crictl version
	I1119 22:34:39.297923  274229 ssh_runner.go:195] Run: which crictl
	I1119 22:34:39.301294  274229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:34:39.326952  274229 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:34:39.327014  274229 ssh_runner.go:195] Run: crio --version
	I1119 22:34:39.353361  274229 ssh_runner.go:195] Run: crio --version
	I1119 22:34:39.381895  274229 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:34:39.383022  274229 cli_runner.go:164] Run: docker network inspect newest-cni-949690 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:34:39.401052  274229 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 22:34:39.404988  274229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:34:39.416039  274229 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 22:34:39.417090  274229 kubeadm.go:884] updating cluster {Name:newest-cni-949690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:34:39.417211  274229 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:34:39.417261  274229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:34:39.448367  274229 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:34:39.448387  274229 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:34:39.448438  274229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:34:39.472423  274229 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:34:39.472440  274229 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:34:39.472447  274229 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1119 22:34:39.472535  274229 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-949690 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:34:39.472590  274229 ssh_runner.go:195] Run: crio config
	I1119 22:34:39.517350  274229 cni.go:84] Creating CNI manager for ""
	I1119 22:34:39.517368  274229 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:34:39.517384  274229 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 22:34:39.517405  274229 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-949690 NodeName:newest-cni-949690 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:34:39.517529  274229 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-949690"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:34:39.517587  274229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:34:39.525082  274229 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:34:39.525134  274229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:34:39.532362  274229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 22:34:39.544100  274229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:34:39.555705  274229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 22:34:39.567225  274229 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:34:39.570464  274229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:34:39.579619  274229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:39.654928  274229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:34:39.677011  274229 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690 for IP: 192.168.103.2
	I1119 22:34:39.677035  274229 certs.go:195] generating shared ca certs ...
	I1119 22:34:39.677052  274229 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:39.677228  274229 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 22:34:39.677271  274229 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 22:34:39.677281  274229 certs.go:257] generating profile certs ...
	I1119 22:34:39.677353  274229 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/client.key
	I1119 22:34:39.677405  274229 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/apiserver.key.7bbc5920
	I1119 22:34:39.677448  274229 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/proxy-client.key
	I1119 22:34:39.677558  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem (1338 bytes)
	W1119 22:34:39.677586  274229 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829_empty.pem, impossibly tiny 0 bytes
	I1119 22:34:39.677595  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:34:39.677616  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:34:39.677637  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:34:39.677658  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 22:34:39.677696  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:34:39.678358  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:34:39.696474  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:34:39.715655  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:34:39.733546  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:34:39.755609  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:34:39.773052  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:34:39.789276  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:34:39.805231  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:34:39.821006  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:34:39.836925  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem --> /usr/share/ca-certificates/12829.pem (1338 bytes)
	I1119 22:34:39.852723  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /usr/share/ca-certificates/128292.pem (1708 bytes)
	I1119 22:34:39.869749  274229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:34:39.881905  274229 ssh_runner.go:195] Run: openssl version
	I1119 22:34:39.887500  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12829.pem && ln -fs /usr/share/ca-certificates/12829.pem /etc/ssl/certs/12829.pem"
	I1119 22:34:39.895583  274229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12829.pem
	I1119 22:34:39.899048  274229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12829.pem
	I1119 22:34:39.899094  274229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12829.pem
	I1119 22:34:39.932764  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12829.pem /etc/ssl/certs/51391683.0"
	I1119 22:34:39.939946  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128292.pem && ln -fs /usr/share/ca-certificates/128292.pem /etc/ssl/certs/128292.pem"
	I1119 22:34:39.948790  274229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128292.pem
	I1119 22:34:39.952241  274229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128292.pem
	I1119 22:34:39.952289  274229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128292.pem
	I1119 22:34:39.986001  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128292.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:34:39.993282  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:34:40.001064  274229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:34:40.004497  274229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:34:40.004538  274229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:34:40.038594  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:34:40.046222  274229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:34:40.049695  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:34:40.083766  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:34:40.116704  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:34:40.150004  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:34:40.189464  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:34:40.244982  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 22:34:40.299485  274229 kubeadm.go:401] StartCluster: {Name:newest-cni-949690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:34:40.299589  274229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:34:40.299646  274229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:34:40.339137  274229 cri.go:89] found id: "161338ce75f2c0c31e0d4eff895db0725cce94686d5b7734a4417e80450100be"
	I1119 22:34:40.339168  274229 cri.go:89] found id: "e0bf6ee50782a92838fb4803387070f4bb8156a7272beb735ace68c4880b3665"
	I1119 22:34:40.339174  274229 cri.go:89] found id: "272ca1f3b39d6be33be2578290a83acd236c28ec785c58e870270aa855c550ea"
	I1119 22:34:40.339179  274229 cri.go:89] found id: "10b10591e7bf4e79373ab1993679cf4bedd7cef09257b365452ef6e96fcd19eb"
	I1119 22:34:40.339183  274229 cri.go:89] found id: ""
	I1119 22:34:40.339228  274229 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 22:34:40.355618  274229 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:34:40Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:34:40.355688  274229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:34:40.365585  274229 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:34:40.365604  274229 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:34:40.365646  274229 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:34:40.374257  274229 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:34:40.375206  274229 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-949690" does not appear in /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:40.375750  274229 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-9335/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-949690" cluster setting kubeconfig missing "newest-cni-949690" context setting]
	I1119 22:34:40.376658  274229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:40.378292  274229 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:34:40.387104  274229 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1119 22:34:40.387173  274229 kubeadm.go:602] duration metric: took 21.562325ms to restartPrimaryControlPlane
	I1119 22:34:40.387180  274229 kubeadm.go:403] duration metric: took 87.702724ms to StartCluster
	I1119 22:34:40.387230  274229 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:40.387328  274229 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:40.390472  274229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:40.390929  274229 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:34:40.390865  274229 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:34:40.391006  274229 addons.go:70] Setting default-storageclass=true in profile "newest-cni-949690"
	I1119 22:34:40.391021  274229 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-949690"
	I1119 22:34:40.390986  274229 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-949690"
	I1119 22:34:40.391198  274229 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-949690"
	I1119 22:34:40.391210  274229 config.go:182] Loaded profile config "newest-cni-949690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	W1119 22:34:40.391215  274229 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:34:40.391243  274229 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:40.391358  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:40.390998  274229 addons.go:70] Setting dashboard=true in profile "newest-cni-949690"
	I1119 22:34:40.391554  274229 addons.go:239] Setting addon dashboard=true in "newest-cni-949690"
	W1119 22:34:40.391568  274229 addons.go:248] addon dashboard should already be in state true
	I1119 22:34:40.391602  274229 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:40.391769  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:40.392330  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:40.393181  274229 out.go:179] * Verifying Kubernetes components...
	I1119 22:34:40.394355  274229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:40.418872  274229 addons.go:239] Setting addon default-storageclass=true in "newest-cni-949690"
	W1119 22:34:40.418895  274229 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:34:40.418931  274229 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:40.419558  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:40.420933  274229 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:34:40.422045  274229 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:34:40.422131  274229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:34:40.422102  274229 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 22:34:40.422257  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:40.424181  274229 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 22:34:37.096614  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:34:37.096952  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:34:37.097004  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:34:37.097051  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:34:37.122255  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:37.122270  229026 cri.go:89] found id: ""
	I1119 22:34:37.122277  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:34:37.122315  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:37.125982  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:34:37.126034  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:34:37.151925  229026 cri.go:89] found id: ""
	I1119 22:34:37.151947  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.151958  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:34:37.151966  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:34:37.152013  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:34:37.179757  229026 cri.go:89] found id: ""
	I1119 22:34:37.179787  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.179796  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:34:37.179804  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:34:37.179872  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:34:37.205929  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:37.205950  229026 cri.go:89] found id: ""
	I1119 22:34:37.205958  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:34:37.205997  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:37.210370  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:34:37.210444  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:34:37.236133  229026 cri.go:89] found id: ""
	I1119 22:34:37.236156  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.236167  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:34:37.236174  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:34:37.236214  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:34:37.262353  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:37.262376  229026 cri.go:89] found id: ""
	I1119 22:34:37.262385  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:34:37.262441  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:37.265937  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:34:37.266000  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:34:37.290077  229026 cri.go:89] found id: ""
	I1119 22:34:37.290098  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.290110  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:34:37.290117  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:34:37.290164  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:34:37.319419  229026 cri.go:89] found id: ""
	I1119 22:34:37.319450  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.319460  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:34:37.319471  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:34:37.319482  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:37.345953  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:34:37.345976  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:34:37.400020  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:34:37.400046  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:34:37.430213  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:34:37.430235  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:34:37.524121  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:34:37.524145  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:34:37.537558  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:34:37.537581  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:34:37.595781  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:34:37.595828  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:34:37.595844  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:37.627780  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:34:37.627843  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:40.185908  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:34:40.186311  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:34:40.186357  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:34:40.186404  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:34:40.220121  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:40.220146  229026 cri.go:89] found id: ""
	I1119 22:34:40.220157  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:34:40.220214  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:40.224985  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:34:40.225047  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:34:40.261312  229026 cri.go:89] found id: ""
	I1119 22:34:40.261335  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.261344  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:34:40.261351  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:34:40.261431  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:34:40.297591  229026 cri.go:89] found id: ""
	I1119 22:34:40.297635  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.297646  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:34:40.297654  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:34:40.297722  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:34:40.337446  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:40.337482  229026 cri.go:89] found id: ""
	I1119 22:34:40.337492  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:34:40.337546  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:40.342719  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:34:40.342786  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:34:40.373745  229026 cri.go:89] found id: ""
	I1119 22:34:40.373807  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.373849  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:34:40.373868  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:34:40.373953  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:34:40.412795  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:40.412900  229026 cri.go:89] found id: ""
	I1119 22:34:40.412913  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:34:40.413150  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:40.418705  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:34:40.418809  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:34:40.465960  229026 cri.go:89] found id: ""
	I1119 22:34:40.465984  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.465993  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:34:40.466000  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:34:40.466057  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:34:40.504267  229026 cri.go:89] found id: ""
	I1119 22:34:40.504302  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.504312  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:34:40.504323  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:34:40.504337  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:40.584015  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:34:40.584054  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:40.621545  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:34:40.621610  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:34:40.691569  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:34:40.691595  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:34:40.727716  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:34:40.727781  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:34:40.860310  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:34:40.860338  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:34:40.875366  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:34:40.875392  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:34:40.933153  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:34:40.933174  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:34:40.933190  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
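	(The log-collection pass above repeatedly looks up each control-plane component with `sudo crictl ps -a --quiet --name=<component>` and, when an ID is found, tails its logs with `crictl logs --tail 400 <id>`. The following is only an illustrative sketch of that pattern, written as a standalone Go program that runs crictl locally instead of through minikube's ssh_runner; the helper name and component list are assumptions, not minikube code.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs mirrors `sudo crictl ps -a --quiet --name=<filter>` from the
	// log above and returns the matching container IDs (possibly none).
	func containerIDs(filter string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+filter).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, name := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			ids, err := containerIDs(name)
			if err != nil || len(ids) == 0 {
				// Corresponds to the "No container was found matching ..." warnings above.
				fmt.Printf("no container found matching %q\n", name)
				continue
			}
			// Tail the last 400 lines of the first match, as the
			// "Gathering logs for ..." steps above do.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
			fmt.Printf("=== %s (%s) ===\n%s\n", name, ids[0], logs)
		}
	}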
	I1119 22:34:40.425251  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 22:34:40.425307  274229 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 22:34:40.425373  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:40.450381  274229 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:34:40.450443  274229 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:34:40.450498  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:40.465605  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:40.471214  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:40.480709  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:40.552184  274229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:34:40.565175  274229 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:34:40.565246  274229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:34:40.580208  274229 api_server.go:72] duration metric: took 189.242933ms to wait for apiserver process to appear ...
	I1119 22:34:40.580230  274229 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:34:40.580246  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:40.585735  274229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:34:40.588059  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 22:34:40.588076  274229 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 22:34:40.591600  274229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:34:40.608904  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 22:34:40.609009  274229 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 22:34:40.627861  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 22:34:40.627885  274229 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 22:34:40.647791  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 22:34:40.647810  274229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 22:34:40.668060  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 22:34:40.668081  274229 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 22:34:40.683240  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 22:34:40.683259  274229 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 22:34:40.695552  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 22:34:40.695625  274229 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 22:34:40.711790  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 22:34:40.711809  274229 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 22:34:40.726528  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:34:40.726545  274229 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 22:34:40.739660  274229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:34:42.105661  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 22:34:42.105695  274229 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 22:34:42.105714  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:42.112858  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 22:34:42.112885  274229 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 22:34:42.581271  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:42.585719  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:34:42.585741  274229 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:34:42.589179  274229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.003414434s)
	I1119 22:34:42.589245  274229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.997622304s)
	I1119 22:34:42.589349  274229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.849664457s)
	I1119 22:34:42.590952  274229 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-949690 addons enable metrics-server
	
	I1119 22:34:42.600811  274229 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1119 22:34:37.944754  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	W1119 22:34:39.945116  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	W1119 22:34:41.945409  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	I1119 22:34:42.601888  274229 addons.go:515] duration metric: took 2.211028002s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 22:34:43.081164  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:43.086527  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:34:43.086560  274229 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:34:43.581251  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:43.586009  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 22:34:43.587182  274229 api_server.go:141] control plane version: v1.34.1
	I1119 22:34:43.587213  274229 api_server.go:131] duration metric: took 3.006975576s to wait for apiserver health ...
	I1119 22:34:43.587226  274229 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:34:43.591079  274229 system_pods.go:59] 8 kube-system pods found
	I1119 22:34:43.591111  274229 system_pods.go:61] "coredns-66bc5c9577-wjbzn" [be4fac81-534c-4a17-b208-8ad44d7e9504] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:34:43.591123  274229 system_pods.go:61] "etcd-newest-cni-949690" [77f0100c-0902-434d-9782-9ff8d579d2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:34:43.591137  274229 system_pods.go:61] "kindnet-fw45d" [b409ae83-4d6c-42a0-a436-2159f75e1458] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 22:34:43.591151  274229 system_pods.go:61] "kube-apiserver-newest-cni-949690" [8dce48d6-c1e0-4cae-a68a-c5dbf4a62adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:34:43.591162  274229 system_pods.go:61] "kube-controller-manager-newest-cni-949690" [f61aadf5-fe6a-4566-a44e-f98c9b09b812] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:34:43.591180  274229 system_pods.go:61] "kube-proxy-f98bb" [391d2f06-e215-4d11-a63e-36749e0fdf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 22:34:43.591189  274229 system_pods.go:61] "kube-scheduler-newest-cni-949690" [04596963-6c61-45c1-bbcb-59e57760f2b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:34:43.591199  274229 system_pods.go:61] "storage-provisioner" [11651cac-2eb3-47f8-be2c-b30375bc4461] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:34:43.591207  274229 system_pods.go:74] duration metric: took 3.971817ms to wait for pod list to return data ...
	I1119 22:34:43.591213  274229 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:34:43.593694  274229 default_sa.go:45] found service account: "default"
	I1119 22:34:43.593711  274229 default_sa.go:55] duration metric: took 2.491157ms for default service account to be created ...
	I1119 22:34:43.593720  274229 kubeadm.go:587] duration metric: took 3.202759307s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 22:34:43.593733  274229 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:34:43.596317  274229 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:34:43.596343  274229 node_conditions.go:123] node cpu capacity is 8
	I1119 22:34:43.596358  274229 node_conditions.go:105] duration metric: took 2.619744ms to run NodePressure ...
	I1119 22:34:43.596372  274229 start.go:242] waiting for startup goroutines ...
	I1119 22:34:43.596385  274229 start.go:247] waiting for cluster config update ...
	I1119 22:34:43.596400  274229 start.go:256] writing updated cluster config ...
	I1119 22:34:43.596668  274229 ssh_runner.go:195] Run: rm -f paused
	I1119 22:34:43.652235  274229 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:34:43.654935  274229 out.go:179] * Done! kubectl is now configured to use "newest-cni-949690" cluster and "default" namespace by default
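	(The startup sequence above shows the apiserver health wait: anonymous requests to /healthz first get 403, then 500 while poststart hooks such as rbac/bootstrap-roles are still running, and finally 200 "ok". Below is a minimal sketch of such a poll loop, assuming a self-signed apiserver certificate and a fixed retry interval; it is not minikube's actual api_server.go implementation, and the endpoint, timeout, and function name are assumptions.)

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				// 403 (anonymous user) and 500 (poststart hooks still running)
				// both appear in the log above before the final 200 "ok".
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.103.2:8443/healthz", 3*time.Minute); err != nil {
			fmt.Println(err)
		}
	}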
	
	
	==> CRI-O <==
	Nov 19 22:34:33 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:33.283126042Z" level=info msg="Starting container: 3e078ff2389e86bacc73118fd9620ad5b7e2aa25719a6cebb214d6724d5eb185" id=f3601544-946d-4ec1-aef7-4225e19a0756 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:34:33 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:33.285226849Z" level=info msg="Started container" PID=1868 containerID=3e078ff2389e86bacc73118fd9620ad5b7e2aa25719a6cebb214d6724d5eb185 description=kube-system/coredns-66bc5c9577-jv7mb/coredns id=f3601544-946d-4ec1-aef7-4225e19a0756 name=/runtime.v1.RuntimeService/StartContainer sandboxID=756ac173dba55fd5f6b7494c04c9c541a9b21a31cf367552dea556ccf29295e2
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.192544596Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c9df45eb-08e9-44d5-a344-49d7561b7511 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.192623473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.19703028Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8b63e1eab23fc4dbc56b970b1c32b86fa001f5bc754b637b7e768d64a1bd6827 UID:39ac96a7-8375-46cb-869f-436b0889fd78 NetNS:/var/run/netns/8afb3d23-62b6-496a-9c6b-3861d2a569b5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000280598}] Aliases:map[]}"
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.197060472Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.212199842Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8b63e1eab23fc4dbc56b970b1c32b86fa001f5bc754b637b7e768d64a1bd6827 UID:39ac96a7-8375-46cb-869f-436b0889fd78 NetNS:/var/run/netns/8afb3d23-62b6-496a-9c6b-3861d2a569b5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000280598}] Aliases:map[]}"
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.212311093Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.213687137Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.215287821Z" level=info msg="Ran pod sandbox 8b63e1eab23fc4dbc56b970b1c32b86fa001f5bc754b637b7e768d64a1bd6827 with infra container: default/busybox/POD" id=c9df45eb-08e9-44d5-a344-49d7561b7511 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.216349866Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=de726ef4-c986-4250-9982-64dbcf5b2c5a name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.216472815Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=de726ef4-c986-4250-9982-64dbcf5b2c5a name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.216539126Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=de726ef4-c986-4250-9982-64dbcf5b2c5a name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.217227842Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=47a8657a-ef62-4a3c-aea0-75b8feaa896a name=/runtime.v1.ImageService/PullImage
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.220072345Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.866234445Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=47a8657a-ef62-4a3c-aea0-75b8feaa896a name=/runtime.v1.ImageService/PullImage
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.866987689Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8a59da67-2f80-43e8-a211-461cc5087024 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.868366458Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b278d7d3-c25b-4de8-b68e-fb309147334d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.871888224Z" level=info msg="Creating container: default/busybox/busybox" id=75efcdf5-2f1a-4c81-8737-91c47ffe9375 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.872015409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.876066319Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.876621875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.906423944Z" level=info msg="Created container 5b72b58c9c741b91f3d905645b480352b6d93970a0bca04a228a6f920eff52b4: default/busybox/busybox" id=75efcdf5-2f1a-4c81-8737-91c47ffe9375 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.907045782Z" level=info msg="Starting container: 5b72b58c9c741b91f3d905645b480352b6d93970a0bca04a228a6f920eff52b4" id=30484c87-a7d6-44c7-ae6a-e8c870a41437 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:34:36 default-k8s-diff-port-409987 crio[781]: time="2025-11-19T22:34:36.909020817Z" level=info msg="Started container" PID=1939 containerID=5b72b58c9c741b91f3d905645b480352b6d93970a0bca04a228a6f920eff52b4 description=default/busybox/busybox id=30484c87-a7d6-44c7-ae6a-e8c870a41437 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b63e1eab23fc4dbc56b970b1c32b86fa001f5bc754b637b7e768d64a1bd6827
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	5b72b58c9c741       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago        Running             busybox                   0                   8b63e1eab23fc       busybox                                                default
	3e078ff2389e8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago       Running             coredns                   0                   756ac173dba55       coredns-66bc5c9577-jv7mb                               kube-system
	b3e291b4e0f60       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago       Running             storage-provisioner       0                   2a30da9653050       storage-provisioner                                    kube-system
	d922a10e78b4d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      52 seconds ago       Running             kube-proxy                0                   d4c3f5e221bb4       kube-proxy-ph6ff                                       kube-system
	217a8d91a35dd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      53 seconds ago       Running             kindnet-cni               0                   be9ec9b111224       kindnet-8ks5v                                          kube-system
	49e912d83cd45       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   a3f6886deefbb       kube-apiserver-default-k8s-diff-port-409987            kube-system
	5432f69f57f93       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   83d21bd87e437       etcd-default-k8s-diff-port-409987                      kube-system
	7bdf30bda6366       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   67d54e41159ba       kube-controller-manager-default-k8s-diff-port-409987   kube-system
	43b76f537271e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   8e2008e749577       kube-scheduler-default-k8s-diff-port-409987            kube-system
	
	
	==> coredns [3e078ff2389e86bacc73118fd9620ad5b7e2aa25719a6cebb214d6724d5eb185] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54918 - 7532 "HINFO IN 1082510843256649220.6419396059492929371. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.136362352s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-409987
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-409987
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=default-k8s-diff-port-409987
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_33_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:33:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-409987
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:34:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:34:32 +0000   Wed, 19 Nov 2025 22:33:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:34:32 +0000   Wed, 19 Nov 2025 22:33:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:34:32 +0000   Wed, 19 Nov 2025 22:33:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:34:32 +0000   Wed, 19 Nov 2025 22:34:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-409987
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                d18d242d-a2ed-4a8e-863e-f45978b5a25d
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-jv7mb                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     53s
	  kube-system                 etcd-default-k8s-diff-port-409987                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         59s
	  kube-system                 kindnet-8ks5v                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-409987             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-409987    200m (2%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-ph6ff                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-409987             100m (1%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 52s   kube-proxy       
	  Normal  Starting                 59s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s   kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s   kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s   kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s   node-controller  Node default-k8s-diff-port-409987 event: Registered Node default-k8s-diff-port-409987 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-409987 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [5432f69f57f939911556db4b3624b7b6c14da6088e2e2019e1af7df914612090] <==
	{"level":"warn","ts":"2025-11-19T22:33:43.445480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.452703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.459284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.465232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.471575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.477317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.483465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.492973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.499611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.506354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.512184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.520909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.526997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.534362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.540974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.548272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.554383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.560768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.577007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.580330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.586923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.593988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:33:43.645679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49522","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T22:33:54.913505Z","caller":"traceutil/trace.go:172","msg":"trace[1937662927] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"127.416363ms","start":"2025-11-19T22:33:54.786072Z","end":"2025-11-19T22:33:54.913489Z","steps":["trace[1937662927] 'process raft request'  (duration: 122.637373ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:33:55.859458Z","caller":"traceutil/trace.go:172","msg":"trace[883202287] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"130.957585ms","start":"2025-11-19T22:33:55.728483Z","end":"2025-11-19T22:33:55.859440Z","steps":["trace[883202287] 'process raft request'  (duration: 124.410671ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:34:45 up  1:17,  0 user,  load average: 2.76, 2.76, 1.90
	Linux default-k8s-diff-port-409987 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [217a8d91a35dd9ce3005c00360b7984101e44e3cd469dac4a29b16bf6758b078] <==
	I1119 22:33:52.331874       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:33:52.332268       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:33:52.332450       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:33:52.332471       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:33:52.332496       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:33:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:33:52.629213       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:33:52.629363       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:33:52.629387       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:33:52.630518       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1119 22:34:22.630388       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1119 22:34:22.630391       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1119 22:34:22.630391       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1119 22:34:22.630562       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1119 22:34:24.129978       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:34:24.130061       1 metrics.go:72] Registering metrics
	I1119 22:34:24.130269       1 controller.go:711] "Syncing nftables rules"
	I1119 22:34:32.635902       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:34:32.635947       1 main.go:301] handling current node
	I1119 22:34:42.631145       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:34:42.631173       1 main.go:301] handling current node
	
	
	==> kube-apiserver [49e912d83cd4556eb2a657880006af2773f4de66d9ce7d505958941f75621739] <==
	E1119 22:33:44.170888       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1119 22:33:44.192523       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:33:44.195202       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:33:44.195264       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1119 22:33:44.201479       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:33:44.201541       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:33:44.374541       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:33:44.994862       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1119 22:33:44.998499       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1119 22:33:44.998519       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:33:45.437417       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:33:45.471735       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:33:45.599147       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1119 22:33:45.604937       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1119 22:33:45.605866       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:33:45.609857       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:33:46.032440       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:33:46.609670       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:33:46.617744       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1119 22:33:46.625167       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1119 22:33:51.035219       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:33:51.039706       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:33:51.687244       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:33:51.738135       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1119 22:34:43.997143       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:45530: use of closed network connection
	
	
	==> kube-controller-manager [7bdf30bda6366d08094743e0574543426d22801966d3d5a13c7a9b262c2b4a94] <==
	I1119 22:33:50.999842       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:33:51.004911       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-409987" podCIDRs=["10.244.0.0/24"]
	I1119 22:33:51.030256       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:33:51.031396       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:33:51.031469       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:33:51.031497       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:33:51.031518       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1119 22:33:51.031538       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:33:51.032697       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:33:51.032719       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:33:51.032744       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1119 22:33:51.032777       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:33:51.032802       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:33:51.032825       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:33:51.033046       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:33:51.034037       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:33:51.035578       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1119 22:33:51.036406       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1119 22:33:51.037384       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:33:51.040759       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:33:51.042880       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:33:51.049232       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 22:33:51.055488       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 22:33:51.058789       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:34:35.989480       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d922a10e78b4d58e6a61f0684f3983a8de6463a9c4299e37c1af37eee73e6894] <==
	I1119 22:33:52.764275       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:33:52.833721       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:33:52.934588       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:33:52.934623       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 22:33:52.934716       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:33:52.952875       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:33:52.952929       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:33:52.958605       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:33:52.959125       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:33:52.959164       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:33:52.960322       1 config.go:200] "Starting service config controller"
	I1119 22:33:52.960345       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:33:52.960393       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:33:52.960412       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:33:52.960452       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:33:52.960461       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:33:52.960533       1 config.go:309] "Starting node config controller"
	I1119 22:33:52.960580       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:33:52.960618       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:33:53.061305       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:33:53.061305       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:33:53.061331       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [43b76f537271ec13e5030c6e2b01d499499aab59da0b605b2947140d401c76f2] <==
	E1119 22:33:44.041240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:33:44.041289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:33:44.041568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1119 22:33:44.041670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:33:44.042001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:33:44.042017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:33:44.042088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:33:44.042124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:33:44.042178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:33:44.042250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:33:44.042537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:33:44.042544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:33:44.042580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:33:44.042602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:33:44.042659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:33:44.042762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:33:44.042735       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:33:44.869238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:33:44.876176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:33:44.877228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:33:44.993764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:33:45.038631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:33:45.165365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:33:45.170436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1119 22:33:45.639642       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:33:47 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:47.527383    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-409987" podStartSLOduration=1.527358191 podStartE2EDuration="1.527358191s" podCreationTimestamp="2025-11-19 22:33:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:33:47.509679183 +0000 UTC m=+1.149699762" watchObservedRunningTime="2025-11-19 22:33:47.527358191 +0000 UTC m=+1.167378731"
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:51.076962    1330 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:51.077676    1330 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:51.773937    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bb480349-c2e4-4b19-b60f-509c6fed52fc-kube-proxy\") pod \"kube-proxy-ph6ff\" (UID: \"bb480349-c2e4-4b19-b60f-509c6fed52fc\") " pod="kube-system/kube-proxy-ph6ff"
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:51.774000    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s685h\" (UniqueName: \"kubernetes.io/projected/bb480349-c2e4-4b19-b60f-509c6fed52fc-kube-api-access-s685h\") pod \"kube-proxy-ph6ff\" (UID: \"bb480349-c2e4-4b19-b60f-509c6fed52fc\") " pod="kube-system/kube-proxy-ph6ff"
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:51.774034    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb480349-c2e4-4b19-b60f-509c6fed52fc-xtables-lock\") pod \"kube-proxy-ph6ff\" (UID: \"bb480349-c2e4-4b19-b60f-509c6fed52fc\") " pod="kube-system/kube-proxy-ph6ff"
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:51.774056    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb480349-c2e4-4b19-b60f-509c6fed52fc-lib-modules\") pod \"kube-proxy-ph6ff\" (UID: \"bb480349-c2e4-4b19-b60f-509c6fed52fc\") " pod="kube-system/kube-proxy-ph6ff"
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:51.874891    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81448556-cfe0-4028-b73f-90d9da973381-lib-modules\") pod \"kindnet-8ks5v\" (UID: \"81448556-cfe0-4028-b73f-90d9da973381\") " pod="kube-system/kindnet-8ks5v"
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:51.875613    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4585m\" (UniqueName: \"kubernetes.io/projected/81448556-cfe0-4028-b73f-90d9da973381-kube-api-access-4585m\") pod \"kindnet-8ks5v\" (UID: \"81448556-cfe0-4028-b73f-90d9da973381\") " pod="kube-system/kindnet-8ks5v"
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:51.875663    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/81448556-cfe0-4028-b73f-90d9da973381-cni-cfg\") pod \"kindnet-8ks5v\" (UID: \"81448556-cfe0-4028-b73f-90d9da973381\") " pod="kube-system/kindnet-8ks5v"
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:51.875716    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81448556-cfe0-4028-b73f-90d9da973381-xtables-lock\") pod \"kindnet-8ks5v\" (UID: \"81448556-cfe0-4028-b73f-90d9da973381\") " pod="kube-system/kindnet-8ks5v"
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: E1119 22:33:51.887217    1330 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: E1119 22:33:51.887254    1330 projected.go:196] Error preparing data for projected volume kube-api-access-s685h for pod kube-system/kube-proxy-ph6ff: configmap "kube-root-ca.crt" not found
	Nov 19 22:33:51 default-k8s-diff-port-409987 kubelet[1330]: E1119 22:33:51.887354    1330 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bb480349-c2e4-4b19-b60f-509c6fed52fc-kube-api-access-s685h podName:bb480349-c2e4-4b19-b60f-509c6fed52fc nodeName:}" failed. No retries permitted until 2025-11-19 22:33:52.387322998 +0000 UTC m=+6.027343538 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s685h" (UniqueName: "kubernetes.io/projected/bb480349-c2e4-4b19-b60f-509c6fed52fc-kube-api-access-s685h") pod "kube-proxy-ph6ff" (UID: "bb480349-c2e4-4b19-b60f-509c6fed52fc") : configmap "kube-root-ca.crt" not found
	Nov 19 22:33:52 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:52.499909    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8ks5v" podStartSLOduration=1.499889901 podStartE2EDuration="1.499889901s" podCreationTimestamp="2025-11-19 22:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:33:52.49971833 +0000 UTC m=+6.139738868" watchObservedRunningTime="2025-11-19 22:33:52.499889901 +0000 UTC m=+6.139910441"
	Nov 19 22:33:54 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:33:54.914901    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ph6ff" podStartSLOduration=3.914877896 podStartE2EDuration="3.914877896s" podCreationTimestamp="2025-11-19 22:33:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:33:53.501098123 +0000 UTC m=+7.141118665" watchObservedRunningTime="2025-11-19 22:33:54.914877896 +0000 UTC m=+8.554898434"
	Nov 19 22:34:32 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:34:32.732632    1330 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 19 22:34:32 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:34:32.966514    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/757d30b7-6575-4017-8ba6-dc22bcdf6d50-config-volume\") pod \"coredns-66bc5c9577-jv7mb\" (UID: \"757d30b7-6575-4017-8ba6-dc22bcdf6d50\") " pod="kube-system/coredns-66bc5c9577-jv7mb"
	Nov 19 22:34:32 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:34:32.966580    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b-tmp\") pod \"storage-provisioner\" (UID: \"47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b\") " pod="kube-system/storage-provisioner"
	Nov 19 22:34:32 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:34:32.966610    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm96c\" (UniqueName: \"kubernetes.io/projected/757d30b7-6575-4017-8ba6-dc22bcdf6d50-kube-api-access-gm96c\") pod \"coredns-66bc5c9577-jv7mb\" (UID: \"757d30b7-6575-4017-8ba6-dc22bcdf6d50\") " pod="kube-system/coredns-66bc5c9577-jv7mb"
	Nov 19 22:34:32 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:34:32.966636    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgd8\" (UniqueName: \"kubernetes.io/projected/47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b-kube-api-access-dzgd8\") pod \"storage-provisioner\" (UID: \"47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b\") " pod="kube-system/storage-provisioner"
	Nov 19 22:34:33 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:34:33.593121    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.593097637 podStartE2EDuration="41.593097637s" podCreationTimestamp="2025-11-19 22:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:34:33.581691389 +0000 UTC m=+47.221711944" watchObservedRunningTime="2025-11-19 22:34:33.593097637 +0000 UTC m=+47.233118175"
	Nov 19 22:34:35 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:34:35.887030    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jv7mb" podStartSLOduration=43.88700181 podStartE2EDuration="43.88700181s" podCreationTimestamp="2025-11-19 22:33:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-19 22:34:33.593320752 +0000 UTC m=+47.233341288" watchObservedRunningTime="2025-11-19 22:34:35.88700181 +0000 UTC m=+49.527022351"
	Nov 19 22:34:35 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:34:35.985731    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6rnk\" (UniqueName: \"kubernetes.io/projected/39ac96a7-8375-46cb-869f-436b0889fd78-kube-api-access-g6rnk\") pod \"busybox\" (UID: \"39ac96a7-8375-46cb-869f-436b0889fd78\") " pod="default/busybox"
	Nov 19 22:34:37 default-k8s-diff-port-409987 kubelet[1330]: I1119 22:34:37.593307    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.942259481 podStartE2EDuration="2.593283076s" podCreationTimestamp="2025-11-19 22:34:35 +0000 UTC" firstStartedPulling="2025-11-19 22:34:36.216868716 +0000 UTC m=+49.856889233" lastFinishedPulling="2025-11-19 22:34:36.867892115 +0000 UTC m=+50.507912828" observedRunningTime="2025-11-19 22:34:37.592712995 +0000 UTC m=+51.232733552" watchObservedRunningTime="2025-11-19 22:34:37.593283076 +0000 UTC m=+51.233303615"
	
	
	==> storage-provisioner [b3e291b4e0f60fbc40065200177b8a759546a9e06d4fe2f0eb82420728f00905] <==
	I1119 22:34:33.265489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:34:33.274373       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:34:33.274481       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:34:33.277152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:33.283616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:34:33.283843       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:34:33.283963       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"238e7196-679c-4ac3-8e69-1a8c292573ac", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-409987_b0d9a09a-3af0-4cce-83b7-e5fb6de7ae1e became leader
	I1119 22:34:33.283994       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-409987_b0d9a09a-3af0-4cce-83b7-e5fb6de7ae1e!
	W1119 22:34:33.289226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:33.292696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:34:33.385055       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-409987_b0d9a09a-3af0-4cce-83b7-e5fb6de7ae1e!
	W1119 22:34:35.295691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:35.299522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:37.302655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:37.306499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:39.309756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:39.313805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:41.317183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:41.324593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:43.328010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:43.331977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:45.335412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:45.339165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-409987 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-949690 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-949690 --alsologtostderr -v=1: exit status 80 (1.557414811s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-949690 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:34:44.329251  277092 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:34:44.329540  277092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:34:44.329554  277092 out.go:374] Setting ErrFile to fd 2...
	I1119 22:34:44.329560  277092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:34:44.329892  277092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:34:44.330208  277092 out.go:368] Setting JSON to false
	I1119 22:34:44.330253  277092 mustload.go:66] Loading cluster: newest-cni-949690
	I1119 22:34:44.330730  277092 config.go:182] Loaded profile config "newest-cni-949690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:34:44.331324  277092 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:44.353878  277092 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:44.354181  277092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:34:44.422223  277092 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-19 22:34:44.411709533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:34:44.422843  277092 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763575914-21918/minikube-v1.37.0-1763575914-21918-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763575914-21918-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-949690 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 22:34:44.424539  277092 out.go:179] * Pausing node newest-cni-949690 ... 
	I1119 22:34:44.425714  277092 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:44.425985  277092 ssh_runner.go:195] Run: systemctl --version
	I1119 22:34:44.426021  277092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:44.446809  277092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:44.540947  277092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:34:44.552982  277092 pause.go:52] kubelet running: true
	I1119 22:34:44.553040  277092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:34:44.693742  277092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:34:44.693841  277092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:34:44.763565  277092 cri.go:89] found id: "5a6d9dc14195b9b242a6301f8b7a66e364d6579075b69f40a82dee8c70a14e75"
	I1119 22:34:44.763583  277092 cri.go:89] found id: "a3e1cf5d0f6522f0e93770a58c8aa00bc49525b93f47672a1d84eb8e5181a5eb"
	I1119 22:34:44.763586  277092 cri.go:89] found id: "161338ce75f2c0c31e0d4eff895db0725cce94686d5b7734a4417e80450100be"
	I1119 22:34:44.763590  277092 cri.go:89] found id: "e0bf6ee50782a92838fb4803387070f4bb8156a7272beb735ace68c4880b3665"
	I1119 22:34:44.763592  277092 cri.go:89] found id: "272ca1f3b39d6be33be2578290a83acd236c28ec785c58e870270aa855c550ea"
	I1119 22:34:44.763596  277092 cri.go:89] found id: "10b10591e7bf4e79373ab1993679cf4bedd7cef09257b365452ef6e96fcd19eb"
	I1119 22:34:44.763600  277092 cri.go:89] found id: ""
	I1119 22:34:44.763642  277092 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:34:44.776932  277092 retry.go:31] will retry after 331.575153ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:34:44Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:34:45.109506  277092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:34:45.122147  277092 pause.go:52] kubelet running: false
	I1119 22:34:45.122209  277092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:34:45.246122  277092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:34:45.246199  277092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:34:45.313740  277092 cri.go:89] found id: "5a6d9dc14195b9b242a6301f8b7a66e364d6579075b69f40a82dee8c70a14e75"
	I1119 22:34:45.313766  277092 cri.go:89] found id: "a3e1cf5d0f6522f0e93770a58c8aa00bc49525b93f47672a1d84eb8e5181a5eb"
	I1119 22:34:45.313773  277092 cri.go:89] found id: "161338ce75f2c0c31e0d4eff895db0725cce94686d5b7734a4417e80450100be"
	I1119 22:34:45.313777  277092 cri.go:89] found id: "e0bf6ee50782a92838fb4803387070f4bb8156a7272beb735ace68c4880b3665"
	I1119 22:34:45.313781  277092 cri.go:89] found id: "272ca1f3b39d6be33be2578290a83acd236c28ec785c58e870270aa855c550ea"
	I1119 22:34:45.313784  277092 cri.go:89] found id: "10b10591e7bf4e79373ab1993679cf4bedd7cef09257b365452ef6e96fcd19eb"
	I1119 22:34:45.313788  277092 cri.go:89] found id: ""
	I1119 22:34:45.313860  277092 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:34:45.325551  277092 retry.go:31] will retry after 247.704534ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:34:45Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:34:45.573981  277092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:34:45.586971  277092 pause.go:52] kubelet running: false
	I1119 22:34:45.587035  277092 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:34:45.714124  277092 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:34:45.714228  277092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:34:45.791334  277092 cri.go:89] found id: "5a6d9dc14195b9b242a6301f8b7a66e364d6579075b69f40a82dee8c70a14e75"
	I1119 22:34:45.791356  277092 cri.go:89] found id: "a3e1cf5d0f6522f0e93770a58c8aa00bc49525b93f47672a1d84eb8e5181a5eb"
	I1119 22:34:45.791363  277092 cri.go:89] found id: "161338ce75f2c0c31e0d4eff895db0725cce94686d5b7734a4417e80450100be"
	I1119 22:34:45.791368  277092 cri.go:89] found id: "e0bf6ee50782a92838fb4803387070f4bb8156a7272beb735ace68c4880b3665"
	I1119 22:34:45.791372  277092 cri.go:89] found id: "272ca1f3b39d6be33be2578290a83acd236c28ec785c58e870270aa855c550ea"
	I1119 22:34:45.791378  277092 cri.go:89] found id: "10b10591e7bf4e79373ab1993679cf4bedd7cef09257b365452ef6e96fcd19eb"
	I1119 22:34:45.791382  277092 cri.go:89] found id: ""
	I1119 22:34:45.791425  277092 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:34:45.806072  277092 out.go:203] 
	W1119 22:34:45.807304  277092 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:34:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:34:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 22:34:45.807321  277092 out.go:285] * 
	* 
	W1119 22:34:45.814443  277092 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 22:34:45.815567  277092 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-949690 --alsologtostderr -v=1 failed: exit status 80
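The exit status 80 (GUEST_PAUSE) above comes from the container-listing step of the pause path: crictl still reports the kube-system containers as running, but `sudo runc list -f json` fails because the runc state directory /run/runc is missing on the node. A minimal way to re-check this from the host is sketched below; it assumes the newest-cni-949690 profile is still up and that /run/runc is the directory the pause path expects (taken from the error message above, not from CRI-O's configuration):

	# state directory the failing command tries to read
	out/minikube-linux-amd64 ssh -p newest-cni-949690 -- sudo ls -la /run/runc
	# CRI-O's own view of the same containers
	out/minikube-linux-amd64 ssh -p newest-cni-949690 -- sudo crictl ps
	# the exact command minikube retried before giving up
	out/minikube-linux-amd64 ssh -p newest-cni-949690 -- sudo runc list -f json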
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-949690
helpers_test.go:243: (dbg) docker inspect newest-cni-949690:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0",
	        "Created": "2025-11-19T22:33:56.785605734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274455,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:34:33.550080526Z",
	            "FinishedAt": "2025-11-19T22:34:32.288924029Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0/hostname",
	        "HostsPath": "/var/lib/docker/containers/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0/hosts",
	        "LogPath": "/var/lib/docker/containers/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0-json.log",
	        "Name": "/newest-cni-949690",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-949690:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-949690",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0",
	                "LowerDir": "/var/lib/docker/overlay2/95b61aaa1ca1f1c411435a7de9b6c0ce104fa0f195b18468c542a269c636cb4a-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/95b61aaa1ca1f1c411435a7de9b6c0ce104fa0f195b18468c542a269c636cb4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/95b61aaa1ca1f1c411435a7de9b6c0ce104fa0f195b18468c542a269c636cb4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/95b61aaa1ca1f1c411435a7de9b6c0ce104fa0f195b18468c542a269c636cb4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-949690",
	                "Source": "/var/lib/docker/volumes/newest-cni-949690/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-949690",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-949690",
	                "name.minikube.sigs.k8s.io": "newest-cni-949690",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1e11e584732a7423eed2a0b8bf2c915eda23fba5c63fabe4b6eed7f1411096a6",
	            "SandboxKey": "/var/run/docker/netns/1e11e584732a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-949690": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f9b0cf1aef5acfa5bdc194747c88b940e5b4d3be9960af2a5c8a6c56975f9e3f",
	                    "EndpointID": "6603ee62a8870508ff4af3e3fe4beeab2c9a2b8091dc76c0e2cabf2317e5e04e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "8e:59:4f:0b:00:ed",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-949690",
	                        "00eedca978ff"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
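For reference, the host ports in the inspect output above (SSH on 22/tcp mapped to 127.0.0.1:33093, the API server on 8443/tcp mapped to 127.0.0.1:33096) can be read back with the same Go template that minikube itself runs later in this log while provisioning the machine. A minimal sketch, assuming the newest-cni-949690 container from this run still exists:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-949690

With the container shown above, this should print 33093.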
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-949690 -n newest-cni-949690
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-949690 -n newest-cni-949690: exit status 2 (344.447856ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
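The harness treats exit status 2 from minikube status as possibly benign ("may be ok"); the host state it reports can also be cross-checked straight from Docker, using the same --format template the restart logic runs further down in this log. A minimal sketch, assuming the newest-cni-949690 container is still present:

	docker container inspect newest-cni-949690 --format '{{.State.Status}}'

This should print running while the node container is up, independent of whether the Kubernetes components inside it have been paused.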
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-949690 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-expiration-855818                                                                                                                                                                                                                     │ cert-expiration-855818       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ image   │ old-k8s-version-680619 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p old-k8s-version-680619 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p disable-driver-mounts-726490                                                                                                                                                                                                               │ disable-driver-mounts-726490 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ image   │ no-preload-178067 image list --format=json                                                                                                                                                                                                    │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p no-preload-178067 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-443380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ stop    │ -p embed-certs-443380 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-443380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-949690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ stop    │ -p newest-cni-949690 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable dashboard -p newest-cni-949690 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-409987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ image   │ newest-cni-949690 image list --format=json                                                                                                                                                                                                    │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ pause   │ -p newest-cni-949690 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-409987 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:34:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:34:33.330295  274229 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:34:33.330411  274229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:34:33.330420  274229 out.go:374] Setting ErrFile to fd 2...
	I1119 22:34:33.330425  274229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:34:33.330632  274229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:34:33.331098  274229 out.go:368] Setting JSON to false
	I1119 22:34:33.332209  274229 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4621,"bootTime":1763587052,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:34:33.332295  274229 start.go:143] virtualization: kvm guest
	I1119 22:34:33.334075  274229 out.go:179] * [newest-cni-949690] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:34:33.335316  274229 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:34:33.335334  274229 notify.go:221] Checking for updates...
	I1119 22:34:33.337262  274229 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:34:33.338454  274229 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:33.339494  274229 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:34:33.340628  274229 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:34:33.341750  274229 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:34:33.343306  274229 config.go:182] Loaded profile config "newest-cni-949690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:34:33.343856  274229 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:34:33.368362  274229 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:34:33.368450  274229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:34:33.423361  274229 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:34:33.414091828 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:34:33.423470  274229 docker.go:319] overlay module found
	I1119 22:34:33.425102  274229 out.go:179] * Using the docker driver based on existing profile
	I1119 22:34:33.426204  274229 start.go:309] selected driver: docker
	I1119 22:34:33.426217  274229 start.go:930] validating driver "docker" against &{Name:newest-cni-949690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:34:33.426303  274229 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:34:33.427062  274229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:34:33.482572  274229 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-19 22:34:33.473412 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:34:33.482955  274229 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 22:34:33.482993  274229 cni.go:84] Creating CNI manager for ""
	I1119 22:34:33.483056  274229 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:34:33.483099  274229 start.go:353] cluster config:
	{Name:newest-cni-949690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:34:33.484922  274229 out.go:179] * Starting "newest-cni-949690" primary control-plane node in "newest-cni-949690" cluster
	I1119 22:34:33.486015  274229 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:34:33.487084  274229 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:34:33.487995  274229 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:34:33.488024  274229 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:34:33.488046  274229 cache.go:65] Caching tarball of preloaded images
	I1119 22:34:33.488079  274229 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:34:33.488138  274229 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:34:33.488153  274229 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:34:33.488259  274229 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/config.json ...
	I1119 22:34:33.507602  274229 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:34:33.507617  274229 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:34:33.507631  274229 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:34:33.507654  274229 start.go:360] acquireMachinesLock for newest-cni-949690: {Name:mk317921465b37fc459423448fcaa153e30f6967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:34:33.507709  274229 start.go:364] duration metric: took 39.568µs to acquireMachinesLock for "newest-cni-949690"
	I1119 22:34:33.507725  274229 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:34:33.507730  274229 fix.go:54] fixHost starting: 
	I1119 22:34:33.507951  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:33.524445  274229 fix.go:112] recreateIfNeeded on newest-cni-949690: state=Stopped err=<nil>
	W1119 22:34:33.524473  274229 fix.go:138] unexpected machine state, will restart: <nil>
	W1119 22:34:29.505783  257842 node_ready.go:57] node "default-k8s-diff-port-409987" has "Ready":"False" status (will retry)
	W1119 22:34:31.506161  257842 node_ready.go:57] node "default-k8s-diff-port-409987" has "Ready":"False" status (will retry)
	I1119 22:34:33.011714  257842 node_ready.go:49] node "default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:33.011742  257842 node_ready.go:38] duration metric: took 41.008374378s for node "default-k8s-diff-port-409987" to be "Ready" ...
	I1119 22:34:33.011757  257842 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:34:33.011802  257842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:34:33.024549  257842 api_server.go:72] duration metric: took 41.352426943s to wait for apiserver process to appear ...
	I1119 22:34:33.024573  257842 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:34:33.024593  257842 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1119 22:34:33.029923  257842 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1119 22:34:33.031006  257842 api_server.go:141] control plane version: v1.34.1
	I1119 22:34:33.031027  257842 api_server.go:131] duration metric: took 6.447983ms to wait for apiserver health ...
	I1119 22:34:33.031036  257842 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:34:33.034211  257842 system_pods.go:59] 8 kube-system pods found
	I1119 22:34:33.034250  257842 system_pods.go:61] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:34:33.034260  257842 system_pods.go:61] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:33.034272  257842 system_pods.go:61] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:33.034277  257842 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:33.034286  257842 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:33.034295  257842 system_pods.go:61] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:33.034300  257842 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:33.034308  257842 system_pods.go:61] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:34:33.034318  257842 system_pods.go:74] duration metric: took 3.273983ms to wait for pod list to return data ...
	I1119 22:34:33.034333  257842 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:34:33.036602  257842 default_sa.go:45] found service account: "default"
	I1119 22:34:33.036620  257842 default_sa.go:55] duration metric: took 2.277845ms for default service account to be created ...
	I1119 22:34:33.036630  257842 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:34:33.039135  257842 system_pods.go:86] 8 kube-system pods found
	I1119 22:34:33.039163  257842 system_pods.go:89] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:34:33.039169  257842 system_pods.go:89] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:33.039175  257842 system_pods.go:89] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:33.039178  257842 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:33.039184  257842 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:33.039191  257842 system_pods.go:89] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:33.039194  257842 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:33.039199  257842 system_pods.go:89] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:34:33.039218  257842 retry.go:31] will retry after 283.539767ms: missing components: kube-dns
	I1119 22:34:33.329109  257842 system_pods.go:86] 8 kube-system pods found
	I1119 22:34:33.329139  257842 system_pods.go:89] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:34:33.329145  257842 system_pods.go:89] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:33.329150  257842 system_pods.go:89] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:33.329154  257842 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:33.329157  257842 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:33.329161  257842 system_pods.go:89] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:33.329164  257842 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:33.329176  257842 system_pods.go:89] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:34:33.329193  257842 retry.go:31] will retry after 250.82065ms: missing components: kube-dns
	I1119 22:34:33.583473  257842 system_pods.go:86] 8 kube-system pods found
	I1119 22:34:33.583501  257842 system_pods.go:89] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:34:33.583507  257842 system_pods.go:89] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:33.583513  257842 system_pods.go:89] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:33.583516  257842 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:33.583520  257842 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:33.583524  257842 system_pods.go:89] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:33.583528  257842 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:33.583531  257842 system_pods.go:89] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Running
	I1119 22:34:33.583545  257842 retry.go:31] will retry after 471.945976ms: missing components: kube-dns
	I1119 22:34:34.059943  257842 system_pods.go:86] 8 kube-system pods found
	I1119 22:34:34.059977  257842 system_pods.go:89] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Running
	I1119 22:34:34.059986  257842 system_pods.go:89] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:34.059993  257842 system_pods.go:89] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:34.059999  257842 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:34.060005  257842 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:34.060011  257842 system_pods.go:89] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:34.060016  257842 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:34.060021  257842 system_pods.go:89] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Running
	I1119 22:34:34.060030  257842 system_pods.go:126] duration metric: took 1.023393605s to wait for k8s-apps to be running ...
	I1119 22:34:34.060042  257842 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:34:34.060088  257842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:34:34.085046  257842 system_svc.go:56] duration metric: took 24.992513ms WaitForService to wait for kubelet
	I1119 22:34:34.085085  257842 kubeadm.go:587] duration metric: took 42.412965914s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:34:34.085108  257842 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:34:34.088575  257842 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:34:34.088604  257842 node_conditions.go:123] node cpu capacity is 8
	I1119 22:34:34.088620  257842 node_conditions.go:105] duration metric: took 3.505513ms to run NodePressure ...
	I1119 22:34:34.088635  257842 start.go:242] waiting for startup goroutines ...
	I1119 22:34:34.088645  257842 start.go:247] waiting for cluster config update ...
	I1119 22:34:34.088659  257842 start.go:256] writing updated cluster config ...
	I1119 22:34:34.088995  257842 ssh_runner.go:195] Run: rm -f paused
	I1119 22:34:34.093920  257842 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:34:34.097808  257842 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jv7mb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.104292  257842 pod_ready.go:94] pod "coredns-66bc5c9577-jv7mb" is "Ready"
	I1119 22:34:34.104315  257842 pod_ready.go:86] duration metric: took 6.453567ms for pod "coredns-66bc5c9577-jv7mb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.106517  257842 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.110627  257842 pod_ready.go:94] pod "etcd-default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:34.110663  257842 pod_ready.go:86] duration metric: took 4.119698ms for pod "etcd-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.112556  257842 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.116315  257842 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:34.116335  257842 pod_ready.go:86] duration metric: took 3.757821ms for pod "kube-apiserver-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.118900  257842 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.497369  257842 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:34.497391  257842 pod_ready.go:86] duration metric: took 378.471441ms for pod "kube-controller-manager-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.698377  257842 pod_ready.go:83] waiting for pod "kube-proxy-ph6ff" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:35.097763  257842 pod_ready.go:94] pod "kube-proxy-ph6ff" is "Ready"
	I1119 22:34:35.097786  257842 pod_ready.go:86] duration metric: took 399.387132ms for pod "kube-proxy-ph6ff" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:35.297421  257842 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:35.697562  257842 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:35.697595  257842 pod_ready.go:86] duration metric: took 400.149921ms for pod "kube-scheduler-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:35.697609  257842 pod_ready.go:40] duration metric: took 1.60365602s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:34:35.740250  257842 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:34:35.742579  257842 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-409987" cluster and "default" namespace by default
	I1119 22:34:34.015894  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:34:34.016410  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:34:34.016473  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:34:34.016533  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:34:34.044039  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:34.044060  229026 cri.go:89] found id: ""
	I1119 22:34:34.044070  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:34:34.044121  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:34.048072  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:34:34.048123  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:34:34.085696  229026 cri.go:89] found id: ""
	I1119 22:34:34.085724  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.085736  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:34:34.085746  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:34:34.085851  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:34:34.120603  229026 cri.go:89] found id: ""
	I1119 22:34:34.120627  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.120636  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:34:34.120645  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:34:34.120708  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:34:34.145396  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:34.145417  229026 cri.go:89] found id: ""
	I1119 22:34:34.145428  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:34:34.145476  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:34.149506  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:34:34.149574  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:34:34.176649  229026 cri.go:89] found id: ""
	I1119 22:34:34.176674  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.176684  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:34:34.176691  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:34:34.176744  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:34:34.203378  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:34.203395  229026 cri.go:89] found id: ""
	I1119 22:34:34.203402  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:34:34.203443  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:34.207412  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:34:34.207488  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:34:34.233093  229026 cri.go:89] found id: ""
	I1119 22:34:34.233114  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.233121  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:34:34.233127  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:34:34.233168  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:34:34.259032  229026 cri.go:89] found id: ""
	I1119 22:34:34.259056  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.259065  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:34:34.259076  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:34:34.259096  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:34.290407  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:34:34.290442  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:34.340448  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:34:34.340475  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:34.366016  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:34:34.366045  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:34:34.409566  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:34:34.409591  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:34:34.437163  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:34:34.437189  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:34:34.530916  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:34:34.530943  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:34:34.544403  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:34:34.544423  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:34:34.596039  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1119 22:34:33.445596  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	W1119 22:34:35.944355  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	I1119 22:34:33.525962  274229 out.go:252] * Restarting existing docker container for "newest-cni-949690" ...
	I1119 22:34:33.526026  274229 cli_runner.go:164] Run: docker start newest-cni-949690
	I1119 22:34:33.807804  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:33.826302  274229 kic.go:430] container "newest-cni-949690" state is running.
	I1119 22:34:33.826759  274229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-949690
	I1119 22:34:33.844694  274229 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/config.json ...
	I1119 22:34:33.844930  274229 machine.go:94] provisionDockerMachine start ...
	I1119 22:34:33.845009  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:33.863360  274229 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:33.863582  274229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1119 22:34:33.863594  274229 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:34:33.864325  274229 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60156->127.0.0.1:33093: read: connection reset by peer
	I1119 22:34:36.995210  274229 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-949690
	
	I1119 22:34:36.995240  274229 ubuntu.go:182] provisioning hostname "newest-cni-949690"
	I1119 22:34:36.995297  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:37.013235  274229 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:37.013489  274229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1119 22:34:37.013510  274229 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-949690 && echo "newest-cni-949690" | sudo tee /etc/hostname
	I1119 22:34:37.147228  274229 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-949690
	
	I1119 22:34:37.147327  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:37.168935  274229 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:37.169231  274229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1119 22:34:37.169259  274229 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-949690' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-949690/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-949690' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:34:37.298184  274229 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:34:37.298215  274229 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 22:34:37.298261  274229 ubuntu.go:190] setting up certificates
	I1119 22:34:37.298284  274229 provision.go:84] configureAuth start
	I1119 22:34:37.298344  274229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-949690
	I1119 22:34:37.319706  274229 provision.go:143] copyHostCerts
	I1119 22:34:37.319771  274229 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem, removing ...
	I1119 22:34:37.319788  274229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem
	I1119 22:34:37.319891  274229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 22:34:37.320027  274229 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem, removing ...
	I1119 22:34:37.320044  274229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem
	I1119 22:34:37.320101  274229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 22:34:37.320226  274229 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem, removing ...
	I1119 22:34:37.320235  274229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem
	I1119 22:34:37.320278  274229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 22:34:37.320347  274229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.newest-cni-949690 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-949690]
	I1119 22:34:37.636299  274229 provision.go:177] copyRemoteCerts
	I1119 22:34:37.636353  274229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:34:37.636390  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:37.656778  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:37.748239  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:34:37.765194  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:34:37.781622  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:34:37.797963  274229 provision.go:87] duration metric: took 499.66535ms to configureAuth
	I1119 22:34:37.797984  274229 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:34:37.798154  274229 config.go:182] Loaded profile config "newest-cni-949690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:34:37.798258  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:37.817180  274229 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:37.817381  274229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1119 22:34:37.817398  274229 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:34:38.091892  274229 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:34:38.091918  274229 machine.go:97] duration metric: took 4.246971119s to provisionDockerMachine
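	The crio.minikube drop-in write and restart above are issued over the node's forwarded SSH port. A minimal sketch of running such a provisioning command with golang.org/x/crypto/ssh follows; it is illustrative only (not libmachine's actual code), with the user, port, and key path taken from the log lines above.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path, user, and forwarded port as reported in the log above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33093", &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test nodes
		})
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		// Run one of the provisioning commands seen in the log.
		out, err := session.CombinedOutput("sudo systemctl restart crio")
		fmt.Printf("output: %q err: %v\n", out, err)
	}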
	I1119 22:34:38.091933  274229 start.go:293] postStartSetup for "newest-cni-949690" (driver="docker")
	I1119 22:34:38.091945  274229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:34:38.092012  274229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:34:38.092060  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:38.109860  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:38.200247  274229 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:34:38.203527  274229 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:34:38.203577  274229 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:34:38.203589  274229 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 22:34:38.203630  274229 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 22:34:38.203698  274229 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem -> 128292.pem in /etc/ssl/certs
	I1119 22:34:38.203800  274229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:34:38.211127  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:34:38.227108  274229 start.go:296] duration metric: took 135.165199ms for postStartSetup
	I1119 22:34:38.227183  274229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:34:38.227217  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:38.245993  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:38.335573  274229 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:34:38.339698  274229 fix.go:56] duration metric: took 4.831963481s for fixHost
	I1119 22:34:38.339720  274229 start.go:83] releasing machines lock for "newest-cni-949690", held for 4.831999371s
	I1119 22:34:38.339779  274229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-949690
	I1119 22:34:38.357434  274229 ssh_runner.go:195] Run: cat /version.json
	I1119 22:34:38.357469  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:38.357551  274229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:34:38.357616  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:38.376364  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:38.376897  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:38.536558  274229 ssh_runner.go:195] Run: systemctl --version
	I1119 22:34:38.542682  274229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:34:38.575491  274229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:34:38.579783  274229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:34:38.579849  274229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:34:38.587790  274229 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:34:38.587811  274229 start.go:496] detecting cgroup driver to use...
	I1119 22:34:38.587851  274229 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:34:38.587888  274229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:34:38.601596  274229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:34:38.612897  274229 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:34:38.612941  274229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:34:38.625963  274229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:34:38.637676  274229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:34:38.714790  274229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:34:38.797980  274229 docker.go:234] disabling docker service ...
	I1119 22:34:38.798067  274229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:34:38.811449  274229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:34:38.822900  274229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:34:38.900719  274229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:34:38.974367  274229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:34:38.986294  274229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:34:38.999467  274229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:34:38.999519  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.008010  274229 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 22:34:39.008056  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.016160  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.024213  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.032158  274229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:34:39.039615  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.047544  274229 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.055117  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.063061  274229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:34:39.069737  274229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
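	The run of sed commands above rewrites the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place to set the pause image and the systemd cgroup manager. A minimal sketch of the two key edits done natively in Go follows; the patterns and values are copied from the log, and doing this with regexp instead of sed is purely illustrative.

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		conf := string(data)

		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)

		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			panic(err)
		}
	}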
	I1119 22:34:39.076429  274229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:39.154204  274229 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:34:39.294029  274229 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:34:39.294103  274229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:34:39.297873  274229 start.go:564] Will wait 60s for crictl version
	I1119 22:34:39.297923  274229 ssh_runner.go:195] Run: which crictl
	I1119 22:34:39.301294  274229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:34:39.326952  274229 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:34:39.327014  274229 ssh_runner.go:195] Run: crio --version
	I1119 22:34:39.353361  274229 ssh_runner.go:195] Run: crio --version
	I1119 22:34:39.381895  274229 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:34:39.383022  274229 cli_runner.go:164] Run: docker network inspect newest-cni-949690 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:34:39.401052  274229 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 22:34:39.404988  274229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
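	The bash one-liner above keeps every existing /etc/hosts line except an old host.minikube.internal mapping and appends a fresh one pointing at the network gateway. A minimal sketch of the same idea follows; the hostname and gateway IP come from the log, and a native Go rewrite like this is an assumption, not minikube's code.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const (
			hostsPath = "/etc/hosts"
			hostname  = "host.minikube.internal"
			gateway   = "192.168.103.1"
		)
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any previous mapping, mirroring: grep -v $'\thost.minikube.internal$'
			if strings.HasSuffix(line, "\t"+hostname) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", gateway, hostname))
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}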
	I1119 22:34:39.416039  274229 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 22:34:39.417090  274229 kubeadm.go:884] updating cluster {Name:newest-cni-949690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:34:39.417211  274229 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:34:39.417261  274229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:34:39.448367  274229 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:34:39.448387  274229 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:34:39.448438  274229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:34:39.472423  274229 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:34:39.472440  274229 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:34:39.472447  274229 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1119 22:34:39.472535  274229 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-949690 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:34:39.472590  274229 ssh_runner.go:195] Run: crio config
	I1119 22:34:39.517350  274229 cni.go:84] Creating CNI manager for ""
	I1119 22:34:39.517368  274229 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:34:39.517384  274229 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 22:34:39.517405  274229 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-949690 NodeName:newest-cni-949690 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:34:39.517529  274229 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-949690"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:34:39.517587  274229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:34:39.525082  274229 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:34:39.525134  274229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:34:39.532362  274229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 22:34:39.544100  274229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:34:39.555705  274229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 22:34:39.567225  274229 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:34:39.570464  274229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:34:39.579619  274229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:39.654928  274229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:34:39.677011  274229 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690 for IP: 192.168.103.2
	I1119 22:34:39.677035  274229 certs.go:195] generating shared ca certs ...
	I1119 22:34:39.677052  274229 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:39.677228  274229 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 22:34:39.677271  274229 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 22:34:39.677281  274229 certs.go:257] generating profile certs ...
	I1119 22:34:39.677353  274229 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/client.key
	I1119 22:34:39.677405  274229 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/apiserver.key.7bbc5920
	I1119 22:34:39.677448  274229 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/proxy-client.key
	I1119 22:34:39.677558  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem (1338 bytes)
	W1119 22:34:39.677586  274229 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829_empty.pem, impossibly tiny 0 bytes
	I1119 22:34:39.677595  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:34:39.677616  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:34:39.677637  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:34:39.677658  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 22:34:39.677696  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:34:39.678358  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:34:39.696474  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:34:39.715655  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:34:39.733546  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:34:39.755609  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:34:39.773052  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:34:39.789276  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:34:39.805231  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:34:39.821006  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:34:39.836925  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem --> /usr/share/ca-certificates/12829.pem (1338 bytes)
	I1119 22:34:39.852723  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /usr/share/ca-certificates/128292.pem (1708 bytes)
	I1119 22:34:39.869749  274229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:34:39.881905  274229 ssh_runner.go:195] Run: openssl version
	I1119 22:34:39.887500  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12829.pem && ln -fs /usr/share/ca-certificates/12829.pem /etc/ssl/certs/12829.pem"
	I1119 22:34:39.895583  274229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12829.pem
	I1119 22:34:39.899048  274229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12829.pem
	I1119 22:34:39.899094  274229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12829.pem
	I1119 22:34:39.932764  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12829.pem /etc/ssl/certs/51391683.0"
	I1119 22:34:39.939946  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128292.pem && ln -fs /usr/share/ca-certificates/128292.pem /etc/ssl/certs/128292.pem"
	I1119 22:34:39.948790  274229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128292.pem
	I1119 22:34:39.952241  274229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128292.pem
	I1119 22:34:39.952289  274229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128292.pem
	I1119 22:34:39.986001  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128292.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:34:39.993282  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:34:40.001064  274229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:34:40.004497  274229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:34:40.004538  274229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:34:40.038594  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:34:40.046222  274229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:34:40.049695  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:34:40.083766  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:34:40.116704  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:34:40.150004  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:34:40.189464  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:34:40.244982  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
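	Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 24 hours (non-zero exit status if it does). A minimal sketch of the same check using Go's crypto/x509 follows, assuming local access to the certificate file rather than the SSH'd openssl calls shown in the log.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at certPath expires
	// within the next duration d (the equivalent of openssl's -checkend).
	func expiresWithin(certPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}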
	I1119 22:34:40.299485  274229 kubeadm.go:401] StartCluster: {Name:newest-cni-949690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:34:40.299589  274229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:34:40.299646  274229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:34:40.339137  274229 cri.go:89] found id: "161338ce75f2c0c31e0d4eff895db0725cce94686d5b7734a4417e80450100be"
	I1119 22:34:40.339168  274229 cri.go:89] found id: "e0bf6ee50782a92838fb4803387070f4bb8156a7272beb735ace68c4880b3665"
	I1119 22:34:40.339174  274229 cri.go:89] found id: "272ca1f3b39d6be33be2578290a83acd236c28ec785c58e870270aa855c550ea"
	I1119 22:34:40.339179  274229 cri.go:89] found id: "10b10591e7bf4e79373ab1993679cf4bedd7cef09257b365452ef6e96fcd19eb"
	I1119 22:34:40.339183  274229 cri.go:89] found id: ""
	I1119 22:34:40.339228  274229 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 22:34:40.355618  274229 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:34:40Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:34:40.355688  274229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:34:40.365585  274229 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:34:40.365604  274229 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:34:40.365646  274229 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:34:40.374257  274229 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:34:40.375206  274229 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-949690" does not appear in /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:40.375750  274229 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-9335/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-949690" cluster setting kubeconfig missing "newest-cni-949690" context setting]
	I1119 22:34:40.376658  274229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:40.378292  274229 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:34:40.387104  274229 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1119 22:34:40.387173  274229 kubeadm.go:602] duration metric: took 21.562325ms to restartPrimaryControlPlane
	I1119 22:34:40.387180  274229 kubeadm.go:403] duration metric: took 87.702724ms to StartCluster
	I1119 22:34:40.387230  274229 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:40.387328  274229 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:40.390472  274229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:40.390929  274229 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:34:40.390865  274229 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:34:40.391006  274229 addons.go:70] Setting default-storageclass=true in profile "newest-cni-949690"
	I1119 22:34:40.391021  274229 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-949690"
	I1119 22:34:40.390986  274229 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-949690"
	I1119 22:34:40.391198  274229 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-949690"
	I1119 22:34:40.391210  274229 config.go:182] Loaded profile config "newest-cni-949690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	W1119 22:34:40.391215  274229 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:34:40.391243  274229 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:40.391358  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:40.390998  274229 addons.go:70] Setting dashboard=true in profile "newest-cni-949690"
	I1119 22:34:40.391554  274229 addons.go:239] Setting addon dashboard=true in "newest-cni-949690"
	W1119 22:34:40.391568  274229 addons.go:248] addon dashboard should already be in state true
	I1119 22:34:40.391602  274229 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:40.391769  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:40.392330  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:40.393181  274229 out.go:179] * Verifying Kubernetes components...
	I1119 22:34:40.394355  274229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:40.418872  274229 addons.go:239] Setting addon default-storageclass=true in "newest-cni-949690"
	W1119 22:34:40.418895  274229 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:34:40.418931  274229 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:40.419558  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:40.420933  274229 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:34:40.422045  274229 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:34:40.422131  274229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:34:40.422102  274229 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 22:34:40.422257  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:40.424181  274229 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 22:34:37.096614  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:34:37.096952  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:34:37.097004  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:34:37.097051  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:34:37.122255  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:37.122270  229026 cri.go:89] found id: ""
	I1119 22:34:37.122277  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:34:37.122315  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:37.125982  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:34:37.126034  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:34:37.151925  229026 cri.go:89] found id: ""
	I1119 22:34:37.151947  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.151958  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:34:37.151966  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:34:37.152013  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:34:37.179757  229026 cri.go:89] found id: ""
	I1119 22:34:37.179787  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.179796  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:34:37.179804  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:34:37.179872  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:34:37.205929  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:37.205950  229026 cri.go:89] found id: ""
	I1119 22:34:37.205958  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:34:37.205997  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:37.210370  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:34:37.210444  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:34:37.236133  229026 cri.go:89] found id: ""
	I1119 22:34:37.236156  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.236167  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:34:37.236174  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:34:37.236214  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:34:37.262353  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:37.262376  229026 cri.go:89] found id: ""
	I1119 22:34:37.262385  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:34:37.262441  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:37.265937  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:34:37.266000  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:34:37.290077  229026 cri.go:89] found id: ""
	I1119 22:34:37.290098  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.290110  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:34:37.290117  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:34:37.290164  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:34:37.319419  229026 cri.go:89] found id: ""
	I1119 22:34:37.319450  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.319460  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:34:37.319471  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:34:37.319482  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:37.345953  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:34:37.345976  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:34:37.400020  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:34:37.400046  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:34:37.430213  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:34:37.430235  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:34:37.524121  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:34:37.524145  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:34:37.537558  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:34:37.537581  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:34:37.595781  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:34:37.595828  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:34:37.595844  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:37.627780  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:34:37.627843  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:40.185908  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:34:40.186311  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:34:40.186357  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:34:40.186404  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:34:40.220121  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:40.220146  229026 cri.go:89] found id: ""
	I1119 22:34:40.220157  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:34:40.220214  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:40.224985  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:34:40.225047  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:34:40.261312  229026 cri.go:89] found id: ""
	I1119 22:34:40.261335  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.261344  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:34:40.261351  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:34:40.261431  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:34:40.297591  229026 cri.go:89] found id: ""
	I1119 22:34:40.297635  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.297646  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:34:40.297654  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:34:40.297722  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:34:40.337446  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:40.337482  229026 cri.go:89] found id: ""
	I1119 22:34:40.337492  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:34:40.337546  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:40.342719  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:34:40.342786  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:34:40.373745  229026 cri.go:89] found id: ""
	I1119 22:34:40.373807  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.373849  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:34:40.373868  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:34:40.373953  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:34:40.412795  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:40.412900  229026 cri.go:89] found id: ""
	I1119 22:34:40.412913  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:34:40.413150  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:40.418705  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:34:40.418809  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:34:40.465960  229026 cri.go:89] found id: ""
	I1119 22:34:40.465984  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.465993  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:34:40.466000  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:34:40.466057  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:34:40.504267  229026 cri.go:89] found id: ""
	I1119 22:34:40.504302  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.504312  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:34:40.504323  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:34:40.504337  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:40.584015  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:34:40.584054  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:40.621545  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:34:40.621610  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:34:40.691569  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:34:40.691595  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:34:40.727716  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:34:40.727781  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:34:40.860310  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:34:40.860338  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:34:40.875366  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:34:40.875392  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:34:40.933153  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:34:40.933174  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:34:40.933190  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:40.425251  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 22:34:40.425307  274229 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 22:34:40.425373  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:40.450381  274229 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:34:40.450443  274229 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:34:40.450498  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:40.465605  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:40.471214  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:40.480709  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:40.552184  274229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:34:40.565175  274229 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:34:40.565246  274229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:34:40.580208  274229 api_server.go:72] duration metric: took 189.242933ms to wait for apiserver process to appear ...
	I1119 22:34:40.580230  274229 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:34:40.580246  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:40.585735  274229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:34:40.588059  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 22:34:40.588076  274229 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 22:34:40.591600  274229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:34:40.608904  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 22:34:40.609009  274229 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 22:34:40.627861  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 22:34:40.627885  274229 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 22:34:40.647791  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 22:34:40.647810  274229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 22:34:40.668060  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 22:34:40.668081  274229 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 22:34:40.683240  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 22:34:40.683259  274229 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 22:34:40.695552  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 22:34:40.695625  274229 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 22:34:40.711790  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 22:34:40.711809  274229 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 22:34:40.726528  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:34:40.726545  274229 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 22:34:40.739660  274229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:34:42.105661  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 22:34:42.105695  274229 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 22:34:42.105714  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:42.112858  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 22:34:42.112885  274229 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 22:34:42.581271  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:42.585719  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:34:42.585741  274229 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:34:42.589179  274229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.003414434s)
	I1119 22:34:42.589245  274229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.997622304s)
	I1119 22:34:42.589349  274229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.849664457s)
	I1119 22:34:42.590952  274229 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-949690 addons enable metrics-server
	
	I1119 22:34:42.600811  274229 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1119 22:34:37.944754  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	W1119 22:34:39.945116  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	W1119 22:34:41.945409  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	I1119 22:34:42.601888  274229 addons.go:515] duration metric: took 2.211028002s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 22:34:43.081164  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:43.086527  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:34:43.086560  274229 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:34:43.581251  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:43.586009  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 22:34:43.587182  274229 api_server.go:141] control plane version: v1.34.1
	I1119 22:34:43.587213  274229 api_server.go:131] duration metric: took 3.006975576s to wait for apiserver health ...
	I1119 22:34:43.587226  274229 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:34:43.591079  274229 system_pods.go:59] 8 kube-system pods found
	I1119 22:34:43.591111  274229 system_pods.go:61] "coredns-66bc5c9577-wjbzn" [be4fac81-534c-4a17-b208-8ad44d7e9504] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:34:43.591123  274229 system_pods.go:61] "etcd-newest-cni-949690" [77f0100c-0902-434d-9782-9ff8d579d2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:34:43.591137  274229 system_pods.go:61] "kindnet-fw45d" [b409ae83-4d6c-42a0-a436-2159f75e1458] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 22:34:43.591151  274229 system_pods.go:61] "kube-apiserver-newest-cni-949690" [8dce48d6-c1e0-4cae-a68a-c5dbf4a62adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:34:43.591162  274229 system_pods.go:61] "kube-controller-manager-newest-cni-949690" [f61aadf5-fe6a-4566-a44e-f98c9b09b812] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:34:43.591180  274229 system_pods.go:61] "kube-proxy-f98bb" [391d2f06-e215-4d11-a63e-36749e0fdf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 22:34:43.591189  274229 system_pods.go:61] "kube-scheduler-newest-cni-949690" [04596963-6c61-45c1-bbcb-59e57760f2b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:34:43.591199  274229 system_pods.go:61] "storage-provisioner" [11651cac-2eb3-47f8-be2c-b30375bc4461] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:34:43.591207  274229 system_pods.go:74] duration metric: took 3.971817ms to wait for pod list to return data ...
	I1119 22:34:43.591213  274229 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:34:43.593694  274229 default_sa.go:45] found service account: "default"
	I1119 22:34:43.593711  274229 default_sa.go:55] duration metric: took 2.491157ms for default service account to be created ...
	I1119 22:34:43.593720  274229 kubeadm.go:587] duration metric: took 3.202759307s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 22:34:43.593733  274229 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:34:43.596317  274229 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:34:43.596343  274229 node_conditions.go:123] node cpu capacity is 8
	I1119 22:34:43.596358  274229 node_conditions.go:105] duration metric: took 2.619744ms to run NodePressure ...
	I1119 22:34:43.596372  274229 start.go:242] waiting for startup goroutines ...
	I1119 22:34:43.596385  274229 start.go:247] waiting for cluster config update ...
	I1119 22:34:43.596400  274229 start.go:256] writing updated cluster config ...
	I1119 22:34:43.596668  274229 ssh_runner.go:195] Run: rm -f paused
	I1119 22:34:43.652235  274229 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:34:43.654935  274229 out.go:179] * Done! kubectl is now configured to use "newest-cni-949690" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.048839952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.051468815Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f8a60a45-1472-4d14-8340-d6f19697caa9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.051796375Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e42f5660-dcfa-4341-b88d-53975f9cd043 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.052999819Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.053426695Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.053593617Z" level=info msg="Ran pod sandbox ed379431a560199c0aa092412a163ae33dfbdd386dab077512c7ed25c4e070a8 with infra container: kube-system/kube-proxy-f98bb/POD" id=f8a60a45-1472-4d14-8340-d6f19697caa9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.05424212Z" level=info msg="Ran pod sandbox ef938b7f052caa695acc4a3bb410f8ba9164c3d34e82978f54dc95dd61cb173d with infra container: kube-system/kindnet-fw45d/POD" id=e42f5660-dcfa-4341-b88d-53975f9cd043 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.054569854Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a67e1a00-fdfd-4031-979f-e27657110ffe name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.055086412Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=cc8c6cc1-d029-453c-87c4-8e4f9f3abbc7 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.055395691Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c4a87a87-54d7-428e-965d-a4c05ad33a78 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.056008972Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7fabe749-ea5d-48af-82cf-bd3e4ab4775a name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.056346583Z" level=info msg="Creating container: kube-system/kube-proxy-f98bb/kube-proxy" id=87cc1015-6553-47e8-bc40-2a09558713c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.056452552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.056944205Z" level=info msg="Creating container: kube-system/kindnet-fw45d/kindnet-cni" id=eb64d65b-c922-41a6-ad63-718735107261 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.05701118Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.061118886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.061789822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.062049121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.062798533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.095278687Z" level=info msg="Created container 5a6d9dc14195b9b242a6301f8b7a66e364d6579075b69f40a82dee8c70a14e75: kube-system/kindnet-fw45d/kindnet-cni" id=eb64d65b-c922-41a6-ad63-718735107261 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.095902991Z" level=info msg="Starting container: 5a6d9dc14195b9b242a6301f8b7a66e364d6579075b69f40a82dee8c70a14e75" id=469e6df2-a38f-4344-a9d7-119605814381 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.097969554Z" level=info msg="Started container" PID=1052 containerID=5a6d9dc14195b9b242a6301f8b7a66e364d6579075b69f40a82dee8c70a14e75 description=kube-system/kindnet-fw45d/kindnet-cni id=469e6df2-a38f-4344-a9d7-119605814381 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef938b7f052caa695acc4a3bb410f8ba9164c3d34e82978f54dc95dd61cb173d
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.101573188Z" level=info msg="Created container a3e1cf5d0f6522f0e93770a58c8aa00bc49525b93f47672a1d84eb8e5181a5eb: kube-system/kube-proxy-f98bb/kube-proxy" id=87cc1015-6553-47e8-bc40-2a09558713c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.10215178Z" level=info msg="Starting container: a3e1cf5d0f6522f0e93770a58c8aa00bc49525b93f47672a1d84eb8e5181a5eb" id=3b7bdc85-5e39-441c-9bfc-05f3caf127eb name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.104756341Z" level=info msg="Started container" PID=1053 containerID=a3e1cf5d0f6522f0e93770a58c8aa00bc49525b93f47672a1d84eb8e5181a5eb description=kube-system/kube-proxy-f98bb/kube-proxy id=3b7bdc85-5e39-441c-9bfc-05f3caf127eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed379431a560199c0aa092412a163ae33dfbdd386dab077512c7ed25c4e070a8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5a6d9dc14195b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   3 seconds ago       Running             kindnet-cni               1                   ef938b7f052ca       kindnet-fw45d                               kube-system
	a3e1cf5d0f652       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   3 seconds ago       Running             kube-proxy                1                   ed379431a5601       kube-proxy-f98bb                            kube-system
	161338ce75f2c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 seconds ago       Running             kube-apiserver            1                   a8c511f195729       kube-apiserver-newest-cni-949690            kube-system
	e0bf6ee50782a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 seconds ago       Running             etcd                      1                   7839be3bfd79d       etcd-newest-cni-949690                      kube-system
	272ca1f3b39d6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 seconds ago       Running             kube-scheduler            1                   dc9ee25f7a25c       kube-scheduler-newest-cni-949690            kube-system
	10b10591e7bf4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 seconds ago       Running             kube-controller-manager   1                   e7782d5d44497       kube-controller-manager-newest-cni-949690   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-949690
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-949690
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=newest-cni-949690
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_34_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:34:09 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-949690
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:34:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:34:42 +0000   Wed, 19 Nov 2025 22:34:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:34:42 +0000   Wed, 19 Nov 2025 22:34:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:34:42 +0000   Wed, 19 Nov 2025 22:34:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Nov 2025 22:34:42 +0000   Wed, 19 Nov 2025 22:34:07 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-949690
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                76884ddf-0fb7-4736-8296-1d7cf95f4d03
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-949690                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-fw45d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-949690             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-949690    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-f98bb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-949690             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node newest-cni-949690 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node newest-cni-949690 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node newest-cni-949690 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-949690 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  35s                kubelet          Node newest-cni-949690 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     35s                kubelet          Node newest-cni-949690 status is now: NodeHasSufficientPID
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           31s                node-controller  Node newest-cni-949690 event: Registered Node newest-cni-949690 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)    kubelet          Node newest-cni-949690 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)    kubelet          Node newest-cni-949690 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x8 over 7s)    kubelet          Node newest-cni-949690 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           1s                 node-controller  Node newest-cni-949690 event: Registered Node newest-cni-949690 in Controller
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [e0bf6ee50782a92838fb4803387070f4bb8156a7272beb735ace68c4880b3665] <==
	{"level":"warn","ts":"2025-11-19T22:34:41.505551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.511601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.527631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.534125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.541734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.549785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.556094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.561647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.567763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.577907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.584607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.591297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.597734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.604384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.610093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.616754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.622702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.629214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.635267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.641357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.647294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.662829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.668565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.674561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.723616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38128","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:34:46 up  1:17,  0 user,  load average: 2.76, 2.76, 1.90
	Linux newest-cni-949690 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5a6d9dc14195b9b242a6301f8b7a66e364d6579075b69f40a82dee8c70a14e75] <==
	I1119 22:34:43.229561       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:34:43.229879       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 22:34:43.230000       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:34:43.230016       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:34:43.230041       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:34:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:34:43.433068       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:34:43.433121       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:34:43.433133       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:34:43.433283       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [161338ce75f2c0c31e0d4eff895db0725cce94686d5b7734a4417e80450100be] <==
	I1119 22:34:42.166610       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:34:42.167088       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 22:34:42.167116       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 22:34:42.167173       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:34:42.167775       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:34:42.167881       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 22:34:42.167944       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 22:34:42.167951       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 22:34:42.168007       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 22:34:42.170326       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:34:42.175665       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 22:34:42.177899       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 22:34:42.177921       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 22:34:42.199770       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:34:42.412247       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:34:42.437069       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:34:42.453658       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:34:42.460400       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:34:42.466523       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:34:42.494412       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.101.19"}
	I1119 22:34:42.504886       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.237.216"}
	I1119 22:34:43.081077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:34:45.704772       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:34:45.905135       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:34:46.005900       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [10b10591e7bf4e79373ab1993679cf4bedd7cef09257b365452ef6e96fcd19eb] <==
	I1119 22:34:45.488989       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:34:45.488996       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:34:45.489946       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:34:45.500447       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:34:45.500510       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:34:45.500535       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:34:45.500543       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:34:45.500558       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:34:45.500568       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:34:45.500585       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:34:45.500684       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:34:45.500837       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:34:45.500940       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:34:45.501974       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:34:45.502067       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:34:45.502173       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-949690"
	I1119 22:34:45.502237       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:34:45.503777       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:34:45.506660       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 22:34:45.508958       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 22:34:45.509051       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:34:45.511203       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:34:45.512754       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:34:45.517900       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:34:45.534237       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [a3e1cf5d0f6522f0e93770a58c8aa00bc49525b93f47672a1d84eb8e5181a5eb] <==
	I1119 22:34:43.146680       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:34:43.220755       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:34:43.321604       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:34:43.321632       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 22:34:43.321715       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:34:43.340690       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:34:43.340735       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:34:43.345427       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:34:43.345777       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:34:43.345805       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:34:43.347202       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:34:43.347230       1 config.go:200] "Starting service config controller"
	I1119 22:34:43.347245       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:34:43.347235       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:34:43.347277       1 config.go:309] "Starting node config controller"
	I1119 22:34:43.347287       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:34:43.347294       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:34:43.348144       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:34:43.348172       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:34:43.447401       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:34:43.447511       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:34:43.449183       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [272ca1f3b39d6be33be2578290a83acd236c28ec785c58e870270aa855c550ea] <==
	I1119 22:34:40.687496       1 serving.go:386] Generated self-signed cert in-memory
	W1119 22:34:42.108664       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 22:34:42.108698       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 22:34:42.108709       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 22:34:42.108719       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 22:34:42.129274       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 22:34:42.129300       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:34:42.132374       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:34:42.132410       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:34:42.132866       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:34:42.132965       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 22:34:42.233501       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:34:41 newest-cni-949690 kubelet[677]: E1119 22:34:41.776995     677 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-949690\" not found" node="newest-cni-949690"
	Nov 19 22:34:41 newest-cni-949690 kubelet[677]: E1119 22:34:41.777177     677 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-949690\" not found" node="newest-cni-949690"
	Nov 19 22:34:41 newest-cni-949690 kubelet[677]: E1119 22:34:41.777294     677 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-949690\" not found" node="newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.143555     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.199881     677 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.199982     677 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.200019     677 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.200911     677 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: E1119 22:34:42.253257     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-949690\" already exists" pod="kube-system/kube-scheduler-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.253290     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: E1119 22:34:42.258370     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-949690\" already exists" pod="kube-system/etcd-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.258402     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: E1119 22:34:42.264121     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-949690\" already exists" pod="kube-system/kube-apiserver-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.264152     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: E1119 22:34:42.269205     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-949690\" already exists" pod="kube-system/kube-controller-manager-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.739648     677 apiserver.go:52] "Watching apiserver"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.796473     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b409ae83-4d6c-42a0-a436-2159f75e1458-cni-cfg\") pod \"kindnet-fw45d\" (UID: \"b409ae83-4d6c-42a0-a436-2159f75e1458\") " pod="kube-system/kindnet-fw45d"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.796523     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b409ae83-4d6c-42a0-a436-2159f75e1458-lib-modules\") pod \"kindnet-fw45d\" (UID: \"b409ae83-4d6c-42a0-a436-2159f75e1458\") " pod="kube-system/kindnet-fw45d"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.796577     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b409ae83-4d6c-42a0-a436-2159f75e1458-xtables-lock\") pod \"kindnet-fw45d\" (UID: \"b409ae83-4d6c-42a0-a436-2159f75e1458\") " pod="kube-system/kindnet-fw45d"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.843135     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.897231     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/391d2f06-e215-4d11-a63e-36749e0fdf39-lib-modules\") pod \"kube-proxy-f98bb\" (UID: \"391d2f06-e215-4d11-a63e-36749e0fdf39\") " pod="kube-system/kube-proxy-f98bb"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.897498     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/391d2f06-e215-4d11-a63e-36749e0fdf39-xtables-lock\") pod \"kube-proxy-f98bb\" (UID: \"391d2f06-e215-4d11-a63e-36749e0fdf39\") " pod="kube-system/kube-proxy-f98bb"
	Nov 19 22:34:44 newest-cni-949690 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:34:44 newest-cni-949690 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:34:44 newest-cni-949690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-949690 -n newest-cni-949690
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-949690 -n newest-cni-949690: exit status 2 (319.043445ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-949690 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-wjbzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nh2v2 kubernetes-dashboard-855c9754f9-fcfs5
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-949690 describe pod coredns-66bc5c9577-wjbzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nh2v2 kubernetes-dashboard-855c9754f9-fcfs5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-949690 describe pod coredns-66bc5c9577-wjbzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nh2v2 kubernetes-dashboard-855c9754f9-fcfs5: exit status 1 (64.953655ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-wjbzn" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-nh2v2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-fcfs5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-949690 describe pod coredns-66bc5c9577-wjbzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nh2v2 kubernetes-dashboard-855c9754f9-fcfs5: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-949690
helpers_test.go:243: (dbg) docker inspect newest-cni-949690:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0",
	        "Created": "2025-11-19T22:33:56.785605734Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 274455,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:34:33.550080526Z",
	            "FinishedAt": "2025-11-19T22:34:32.288924029Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0/hostname",
	        "HostsPath": "/var/lib/docker/containers/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0/hosts",
	        "LogPath": "/var/lib/docker/containers/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0/00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0-json.log",
	        "Name": "/newest-cni-949690",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-949690:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-949690",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "00eedca978ffacc4c69002c20d8cc20c32f882c96b667710503822109ccc27a0",
	                "LowerDir": "/var/lib/docker/overlay2/95b61aaa1ca1f1c411435a7de9b6c0ce104fa0f195b18468c542a269c636cb4a-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/95b61aaa1ca1f1c411435a7de9b6c0ce104fa0f195b18468c542a269c636cb4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/95b61aaa1ca1f1c411435a7de9b6c0ce104fa0f195b18468c542a269c636cb4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/95b61aaa1ca1f1c411435a7de9b6c0ce104fa0f195b18468c542a269c636cb4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-949690",
	                "Source": "/var/lib/docker/volumes/newest-cni-949690/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-949690",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-949690",
	                "name.minikube.sigs.k8s.io": "newest-cni-949690",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1e11e584732a7423eed2a0b8bf2c915eda23fba5c63fabe4b6eed7f1411096a6",
	            "SandboxKey": "/var/run/docker/netns/1e11e584732a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-949690": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f9b0cf1aef5acfa5bdc194747c88b940e5b4d3be9960af2a5c8a6c56975f9e3f",
	                    "EndpointID": "6603ee62a8870508ff4af3e3fe4beeab2c9a2b8091dc76c0e2cabf2317e5e04e",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "8e:59:4f:0b:00:ed",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-949690",
	                        "00eedca978ff"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
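Note that the inspect output reports State.Status as "running" and Paused as false for the node container; that is expected even during a pause test, since minikube's pause targets the Kubernetes components inside the node (the kubelet stop shown in the journal above) rather than the outer Docker container. A quick way to pull just those fields, as a sketch:

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' newest-cni-949690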
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-949690 -n newest-cni-949690
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-949690 -n newest-cni-949690: exit status 2 (321.380198ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-949690 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-expiration-855818                                                                                                                                                                                                                     │ cert-expiration-855818       │ jenkins │ v1.37.0 │ 19 Nov 25 22:32 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ image   │ old-k8s-version-680619 image list --format=json                                                                                                                                                                                               │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p old-k8s-version-680619 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p old-k8s-version-680619                                                                                                                                                                                                                     │ old-k8s-version-680619       │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ delete  │ -p disable-driver-mounts-726490                                                                                                                                                                                                               │ disable-driver-mounts-726490 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ image   │ no-preload-178067 image list --format=json                                                                                                                                                                                                    │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p no-preload-178067 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-443380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ stop    │ -p embed-certs-443380 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-443380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-949690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ stop    │ -p newest-cni-949690 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable dashboard -p newest-cni-949690 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-409987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ image   │ newest-cni-949690 image list --format=json                                                                                                                                                                                                    │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ pause   │ -p newest-cni-949690 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-409987 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:34:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:34:33.330295  274229 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:34:33.330411  274229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:34:33.330420  274229 out.go:374] Setting ErrFile to fd 2...
	I1119 22:34:33.330425  274229 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:34:33.330632  274229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:34:33.331098  274229 out.go:368] Setting JSON to false
	I1119 22:34:33.332209  274229 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4621,"bootTime":1763587052,"procs":329,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:34:33.332295  274229 start.go:143] virtualization: kvm guest
	I1119 22:34:33.334075  274229 out.go:179] * [newest-cni-949690] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:34:33.335316  274229 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:34:33.335334  274229 notify.go:221] Checking for updates...
	I1119 22:34:33.337262  274229 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:34:33.338454  274229 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:33.339494  274229 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:34:33.340628  274229 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:34:33.341750  274229 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:34:33.343306  274229 config.go:182] Loaded profile config "newest-cni-949690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:34:33.343856  274229 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:34:33.368362  274229 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:34:33.368450  274229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:34:33.423361  274229 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:34:33.414091828 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:34:33.423470  274229 docker.go:319] overlay module found
	I1119 22:34:33.425102  274229 out.go:179] * Using the docker driver based on existing profile
	I1119 22:34:33.426204  274229 start.go:309] selected driver: docker
	I1119 22:34:33.426217  274229 start.go:930] validating driver "docker" against &{Name:newest-cni-949690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:34:33.426303  274229 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:34:33.427062  274229 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:34:33.482572  274229 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-19 22:34:33.473412 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:34:33.482955  274229 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 22:34:33.482993  274229 cni.go:84] Creating CNI manager for ""
	I1119 22:34:33.483056  274229 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:34:33.483099  274229 start.go:353] cluster config:
	{Name:newest-cni-949690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:34:33.484922  274229 out.go:179] * Starting "newest-cni-949690" primary control-plane node in "newest-cni-949690" cluster
	I1119 22:34:33.486015  274229 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:34:33.487084  274229 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:34:33.487995  274229 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:34:33.488024  274229 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:34:33.488046  274229 cache.go:65] Caching tarball of preloaded images
	I1119 22:34:33.488079  274229 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:34:33.488138  274229 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:34:33.488153  274229 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:34:33.488259  274229 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/config.json ...
	I1119 22:34:33.507602  274229 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:34:33.507617  274229 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:34:33.507631  274229 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:34:33.507654  274229 start.go:360] acquireMachinesLock for newest-cni-949690: {Name:mk317921465b37fc459423448fcaa153e30f6967 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:34:33.507709  274229 start.go:364] duration metric: took 39.568µs to acquireMachinesLock for "newest-cni-949690"
	I1119 22:34:33.507725  274229 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:34:33.507730  274229 fix.go:54] fixHost starting: 
	I1119 22:34:33.507951  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:33.524445  274229 fix.go:112] recreateIfNeeded on newest-cni-949690: state=Stopped err=<nil>
	W1119 22:34:33.524473  274229 fix.go:138] unexpected machine state, will restart: <nil>
	W1119 22:34:29.505783  257842 node_ready.go:57] node "default-k8s-diff-port-409987" has "Ready":"False" status (will retry)
	W1119 22:34:31.506161  257842 node_ready.go:57] node "default-k8s-diff-port-409987" has "Ready":"False" status (will retry)
	I1119 22:34:33.011714  257842 node_ready.go:49] node "default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:33.011742  257842 node_ready.go:38] duration metric: took 41.008374378s for node "default-k8s-diff-port-409987" to be "Ready" ...
	I1119 22:34:33.011757  257842 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:34:33.011802  257842 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:34:33.024549  257842 api_server.go:72] duration metric: took 41.352426943s to wait for apiserver process to appear ...
	I1119 22:34:33.024573  257842 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:34:33.024593  257842 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1119 22:34:33.029923  257842 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1119 22:34:33.031006  257842 api_server.go:141] control plane version: v1.34.1
	I1119 22:34:33.031027  257842 api_server.go:131] duration metric: took 6.447983ms to wait for apiserver health ...
	I1119 22:34:33.031036  257842 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:34:33.034211  257842 system_pods.go:59] 8 kube-system pods found
	I1119 22:34:33.034250  257842 system_pods.go:61] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:34:33.034260  257842 system_pods.go:61] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:33.034272  257842 system_pods.go:61] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:33.034277  257842 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:33.034286  257842 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:33.034295  257842 system_pods.go:61] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:33.034300  257842 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:33.034308  257842 system_pods.go:61] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:34:33.034318  257842 system_pods.go:74] duration metric: took 3.273983ms to wait for pod list to return data ...
	I1119 22:34:33.034333  257842 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:34:33.036602  257842 default_sa.go:45] found service account: "default"
	I1119 22:34:33.036620  257842 default_sa.go:55] duration metric: took 2.277845ms for default service account to be created ...
	I1119 22:34:33.036630  257842 system_pods.go:116] waiting for k8s-apps to be running ...
	I1119 22:34:33.039135  257842 system_pods.go:86] 8 kube-system pods found
	I1119 22:34:33.039163  257842 system_pods.go:89] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:34:33.039169  257842 system_pods.go:89] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:33.039175  257842 system_pods.go:89] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:33.039178  257842 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:33.039184  257842 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:33.039191  257842 system_pods.go:89] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:33.039194  257842 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:33.039199  257842 system_pods.go:89] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:34:33.039218  257842 retry.go:31] will retry after 283.539767ms: missing components: kube-dns
	I1119 22:34:33.329109  257842 system_pods.go:86] 8 kube-system pods found
	I1119 22:34:33.329139  257842 system_pods.go:89] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:34:33.329145  257842 system_pods.go:89] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:33.329150  257842 system_pods.go:89] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:33.329154  257842 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:33.329157  257842 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:33.329161  257842 system_pods.go:89] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:33.329164  257842 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:33.329176  257842 system_pods.go:89] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1119 22:34:33.329193  257842 retry.go:31] will retry after 250.82065ms: missing components: kube-dns
	I1119 22:34:33.583473  257842 system_pods.go:86] 8 kube-system pods found
	I1119 22:34:33.583501  257842 system_pods.go:89] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1119 22:34:33.583507  257842 system_pods.go:89] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:33.583513  257842 system_pods.go:89] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:33.583516  257842 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:33.583520  257842 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:33.583524  257842 system_pods.go:89] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:33.583528  257842 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:33.583531  257842 system_pods.go:89] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Running
	I1119 22:34:33.583545  257842 retry.go:31] will retry after 471.945976ms: missing components: kube-dns
	I1119 22:34:34.059943  257842 system_pods.go:86] 8 kube-system pods found
	I1119 22:34:34.059977  257842 system_pods.go:89] "coredns-66bc5c9577-jv7mb" [757d30b7-6575-4017-8ba6-dc22bcdf6d50] Running
	I1119 22:34:34.059986  257842 system_pods.go:89] "etcd-default-k8s-diff-port-409987" [333b61f4-f4ff-4412-8f4a-2f5c68b7ba1c] Running
	I1119 22:34:34.059993  257842 system_pods.go:89] "kindnet-8ks5v" [81448556-cfe0-4028-b73f-90d9da973381] Running
	I1119 22:34:34.059999  257842 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-409987" [4a4aee08-0b1d-4ee6-a03c-12ad9ae212c6] Running
	I1119 22:34:34.060005  257842 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-409987" [424010bb-8c3f-4220-8ec3-9ce006dee671] Running
	I1119 22:34:34.060011  257842 system_pods.go:89] "kube-proxy-ph6ff" [bb480349-c2e4-4b19-b60f-509c6fed52fc] Running
	I1119 22:34:34.060016  257842 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-409987" [09f3a0ad-2db8-4740-9530-36036961a24c] Running
	I1119 22:34:34.060021  257842 system_pods.go:89] "storage-provisioner" [47dbcf5a-2b68-4bc7-a96b-4310b64a0f0b] Running
	I1119 22:34:34.060030  257842 system_pods.go:126] duration metric: took 1.023393605s to wait for k8s-apps to be running ...
	I1119 22:34:34.060042  257842 system_svc.go:44] waiting for kubelet service to be running ....
	I1119 22:34:34.060088  257842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:34:34.085046  257842 system_svc.go:56] duration metric: took 24.992513ms WaitForService to wait for kubelet
	I1119 22:34:34.085085  257842 kubeadm.go:587] duration metric: took 42.412965914s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:34:34.085108  257842 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:34:34.088575  257842 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:34:34.088604  257842 node_conditions.go:123] node cpu capacity is 8
	I1119 22:34:34.088620  257842 node_conditions.go:105] duration metric: took 3.505513ms to run NodePressure ...
	I1119 22:34:34.088635  257842 start.go:242] waiting for startup goroutines ...
	I1119 22:34:34.088645  257842 start.go:247] waiting for cluster config update ...
	I1119 22:34:34.088659  257842 start.go:256] writing updated cluster config ...
	I1119 22:34:34.088995  257842 ssh_runner.go:195] Run: rm -f paused
	I1119 22:34:34.093920  257842 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:34:34.097808  257842 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jv7mb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.104292  257842 pod_ready.go:94] pod "coredns-66bc5c9577-jv7mb" is "Ready"
	I1119 22:34:34.104315  257842 pod_ready.go:86] duration metric: took 6.453567ms for pod "coredns-66bc5c9577-jv7mb" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.106517  257842 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.110627  257842 pod_ready.go:94] pod "etcd-default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:34.110663  257842 pod_ready.go:86] duration metric: took 4.119698ms for pod "etcd-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.112556  257842 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.116315  257842 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:34.116335  257842 pod_ready.go:86] duration metric: took 3.757821ms for pod "kube-apiserver-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.118900  257842 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.497369  257842 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:34.497391  257842 pod_ready.go:86] duration metric: took 378.471441ms for pod "kube-controller-manager-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:34.698377  257842 pod_ready.go:83] waiting for pod "kube-proxy-ph6ff" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:35.097763  257842 pod_ready.go:94] pod "kube-proxy-ph6ff" is "Ready"
	I1119 22:34:35.097786  257842 pod_ready.go:86] duration metric: took 399.387132ms for pod "kube-proxy-ph6ff" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:35.297421  257842 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:35.697562  257842 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-409987" is "Ready"
	I1119 22:34:35.697595  257842 pod_ready.go:86] duration metric: took 400.149921ms for pod "kube-scheduler-default-k8s-diff-port-409987" in "kube-system" namespace to be "Ready" or be gone ...
	I1119 22:34:35.697609  257842 pod_ready.go:40] duration metric: took 1.60365602s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1119 22:34:35.740250  257842 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:34:35.742579  257842 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-409987" cluster and "default" namespace by default
	I1119 22:34:34.015894  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:34:34.016410  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:34:34.016473  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:34:34.016533  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:34:34.044039  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:34.044060  229026 cri.go:89] found id: ""
	I1119 22:34:34.044070  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:34:34.044121  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:34.048072  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:34:34.048123  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:34:34.085696  229026 cri.go:89] found id: ""
	I1119 22:34:34.085724  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.085736  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:34:34.085746  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:34:34.085851  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:34:34.120603  229026 cri.go:89] found id: ""
	I1119 22:34:34.120627  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.120636  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:34:34.120645  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:34:34.120708  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:34:34.145396  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:34.145417  229026 cri.go:89] found id: ""
	I1119 22:34:34.145428  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:34:34.145476  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:34.149506  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:34:34.149574  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:34:34.176649  229026 cri.go:89] found id: ""
	I1119 22:34:34.176674  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.176684  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:34:34.176691  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:34:34.176744  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:34:34.203378  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:34.203395  229026 cri.go:89] found id: ""
	I1119 22:34:34.203402  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:34:34.203443  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:34.207412  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:34:34.207488  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:34:34.233093  229026 cri.go:89] found id: ""
	I1119 22:34:34.233114  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.233121  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:34:34.233127  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:34:34.233168  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:34:34.259032  229026 cri.go:89] found id: ""
	I1119 22:34:34.259056  229026 logs.go:282] 0 containers: []
	W1119 22:34:34.259065  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:34:34.259076  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:34:34.259096  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:34.290407  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:34:34.290442  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:34.340448  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:34:34.340475  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:34.366016  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:34:34.366045  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:34:34.409566  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:34:34.409591  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:34:34.437163  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:34:34.437189  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:34:34.530916  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:34:34.530943  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:34:34.544403  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:34:34.544423  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:34:34.596039  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1119 22:34:33.445596  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	W1119 22:34:35.944355  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
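The two pod_ready warnings above come from minikube repeatedly polling the coredns pod's Ready condition. A rough manual equivalent of that check (a sketch; it assumes kubectl is pointed at the same cluster and reuses the pod name from the log) is:

    # prints "True" once the pod passes its readiness probe, "False" while it does not
    kubectl -n kube-system get pod coredns-66bc5c9577-jmjmf \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'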
	I1119 22:34:33.525962  274229 out.go:252] * Restarting existing docker container for "newest-cni-949690" ...
	I1119 22:34:33.526026  274229 cli_runner.go:164] Run: docker start newest-cni-949690
	I1119 22:34:33.807804  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:33.826302  274229 kic.go:430] container "newest-cni-949690" state is running.
	I1119 22:34:33.826759  274229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-949690
	I1119 22:34:33.844694  274229 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/config.json ...
	I1119 22:34:33.844930  274229 machine.go:94] provisionDockerMachine start ...
	I1119 22:34:33.845009  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:33.863360  274229 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:33.863582  274229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1119 22:34:33.863594  274229 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:34:33.864325  274229 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60156->127.0.0.1:33093: read: connection reset by peer
	I1119 22:34:36.995210  274229 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-949690
	
	I1119 22:34:36.995240  274229 ubuntu.go:182] provisioning hostname "newest-cni-949690"
	I1119 22:34:36.995297  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:37.013235  274229 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:37.013489  274229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1119 22:34:37.013510  274229 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-949690 && echo "newest-cni-949690" | sudo tee /etc/hostname
	I1119 22:34:37.147228  274229 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-949690
	
	I1119 22:34:37.147327  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:37.168935  274229 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:37.169231  274229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1119 22:34:37.169259  274229 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-949690' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-949690/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-949690' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:34:37.298184  274229 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:34:37.298215  274229 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 22:34:37.298261  274229 ubuntu.go:190] setting up certificates
	I1119 22:34:37.298284  274229 provision.go:84] configureAuth start
	I1119 22:34:37.298344  274229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-949690
	I1119 22:34:37.319706  274229 provision.go:143] copyHostCerts
	I1119 22:34:37.319771  274229 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem, removing ...
	I1119 22:34:37.319788  274229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem
	I1119 22:34:37.319891  274229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 22:34:37.320027  274229 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem, removing ...
	I1119 22:34:37.320044  274229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem
	I1119 22:34:37.320101  274229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 22:34:37.320226  274229 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem, removing ...
	I1119 22:34:37.320235  274229 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem
	I1119 22:34:37.320278  274229 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 22:34:37.320347  274229 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.newest-cni-949690 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-949690]
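The server certificate generated in the step above embeds the SANs listed in the san=[...] field. One way to confirm what actually ended up in the cert (a sketch, run on the Jenkins host that owns the .minikube tree) is:

    # expect 127.0.0.1, 192.168.103.2, localhost, minikube and newest-cni-949690 in the SAN list
    openssl x509 -in /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem \
      -noout -text | grep -A1 'Subject Alternative Name'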
	I1119 22:34:37.636299  274229 provision.go:177] copyRemoteCerts
	I1119 22:34:37.636353  274229 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:34:37.636390  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:37.656778  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:37.748239  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:34:37.765194  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1119 22:34:37.781622  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:34:37.797963  274229 provision.go:87] duration metric: took 499.66535ms to configureAuth
	I1119 22:34:37.797984  274229 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:34:37.798154  274229 config.go:182] Loaded profile config "newest-cni-949690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:34:37.798258  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:37.817180  274229 main.go:143] libmachine: Using SSH client type: native
	I1119 22:34:37.817381  274229 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1119 22:34:37.817398  274229 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:34:38.091892  274229 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:34:38.091918  274229 machine.go:97] duration metric: took 4.246971119s to provisionDockerMachine
	I1119 22:34:38.091933  274229 start.go:293] postStartSetup for "newest-cni-949690" (driver="docker")
	I1119 22:34:38.091945  274229 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:34:38.092012  274229 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:34:38.092060  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:38.109860  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:38.200247  274229 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:34:38.203527  274229 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:34:38.203577  274229 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:34:38.203589  274229 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 22:34:38.203630  274229 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 22:34:38.203698  274229 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem -> 128292.pem in /etc/ssl/certs
	I1119 22:34:38.203800  274229 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:34:38.211127  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:34:38.227108  274229 start.go:296] duration metric: took 135.165199ms for postStartSetup
	I1119 22:34:38.227183  274229 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:34:38.227217  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:38.245993  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:38.335573  274229 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:34:38.339698  274229 fix.go:56] duration metric: took 4.831963481s for fixHost
	I1119 22:34:38.339720  274229 start.go:83] releasing machines lock for "newest-cni-949690", held for 4.831999371s
	I1119 22:34:38.339779  274229 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-949690
	I1119 22:34:38.357434  274229 ssh_runner.go:195] Run: cat /version.json
	I1119 22:34:38.357469  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:38.357551  274229 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:34:38.357616  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:38.376364  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:38.376897  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:38.536558  274229 ssh_runner.go:195] Run: systemctl --version
	I1119 22:34:38.542682  274229 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:34:38.575491  274229 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:34:38.579783  274229 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:34:38.579849  274229 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:34:38.587790  274229 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:34:38.587811  274229 start.go:496] detecting cgroup driver to use...
	I1119 22:34:38.587851  274229 detect.go:190] detected "systemd" cgroup driver on host os
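minikube reports that it detected the "systemd" cgroup driver on the host. Two common probes for double-checking how a host is set up (hedged: these are generic checks, not necessarily the exact detection logic used here) are:

    docker info --format '{{.CgroupDriver}}'   # usually prints "systemd" on a host configured like this one
    stat -fc %T /sys/fs/cgroup                 # "cgroup2fs" indicates a unified cgroup v2 hierarchy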
	I1119 22:34:38.587888  274229 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:34:38.601596  274229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:34:38.612897  274229 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:34:38.612941  274229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:34:38.625963  274229 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:34:38.637676  274229 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:34:38.714790  274229 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:34:38.797980  274229 docker.go:234] disabling docker service ...
	I1119 22:34:38.798067  274229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:34:38.811449  274229 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:34:38.822900  274229 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:34:38.900719  274229 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:34:38.974367  274229 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:34:38.986294  274229 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:34:38.999467  274229 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:34:38.999519  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.008010  274229 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 22:34:39.008056  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.016160  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.024213  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.032158  274229 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:34:39.039615  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.047544  274229 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:34:39.055117  274229 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
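The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to systemd, forces conmon into the pod cgroup, and re-adds net.ipv4.ip_unprivileged_port_start=0 to default_sysctls. Assuming the stock section layout of a CRI-O drop-in, the relevant keys should end up looking roughly like this (a reconstruction from the commands, not a dump of the actual file):

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys after the edits above)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]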
	I1119 22:34:39.063061  274229 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:34:39.069737  274229 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:34:39.076429  274229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:39.154204  274229 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:34:39.294029  274229 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:34:39.294103  274229 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:34:39.297873  274229 start.go:564] Will wait 60s for crictl version
	I1119 22:34:39.297923  274229 ssh_runner.go:195] Run: which crictl
	I1119 22:34:39.301294  274229 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:34:39.326952  274229 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:34:39.327014  274229 ssh_runner.go:195] Run: crio --version
	I1119 22:34:39.353361  274229 ssh_runner.go:195] Run: crio --version
	I1119 22:34:39.381895  274229 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:34:39.383022  274229 cli_runner.go:164] Run: docker network inspect newest-cni-949690 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:34:39.401052  274229 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1119 22:34:39.404988  274229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:34:39.416039  274229 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1119 22:34:39.417090  274229 kubeadm.go:884] updating cluster {Name:newest-cni-949690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:34:39.417211  274229 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:34:39.417261  274229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:34:39.448367  274229 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:34:39.448387  274229 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:34:39.448438  274229 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:34:39.472423  274229 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:34:39.472440  274229 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:34:39.472447  274229 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1119 22:34:39.472535  274229 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-949690 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
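The unit snippet above is what minikube renders for the kubelet; a few lines below it is copied to the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to /lib/systemd/system/kubelet.service. To see the effective unit with all drop-ins applied, run inside the node:

    systemctl cat kubelet    # prints kubelet.service followed by the 10-kubeadm.conf override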
	I1119 22:34:39.472590  274229 ssh_runner.go:195] Run: crio config
	I1119 22:34:39.517350  274229 cni.go:84] Creating CNI manager for ""
	I1119 22:34:39.517368  274229 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:34:39.517384  274229 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1119 22:34:39.517405  274229 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-949690 NodeName:newest-cni-949690 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:34:39.517529  274229 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-949690"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
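The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new a few steps later. To sanity-check a config like this outside of minikube, kubeadm can validate it without touching the node (a sketch; it assumes the kubeadm binary shipped under /var/lib/minikube/binaries/v1.34.1 is on PATH inside the node):

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # or exercise the full init path without applying anything:
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run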
	
	I1119 22:34:39.517587  274229 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:34:39.525082  274229 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:34:39.525134  274229 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:34:39.532362  274229 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1119 22:34:39.544100  274229 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:34:39.555705  274229 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1119 22:34:39.567225  274229 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:34:39.570464  274229 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:34:39.579619  274229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:39.654928  274229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:34:39.677011  274229 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690 for IP: 192.168.103.2
	I1119 22:34:39.677035  274229 certs.go:195] generating shared ca certs ...
	I1119 22:34:39.677052  274229 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:39.677228  274229 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 22:34:39.677271  274229 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 22:34:39.677281  274229 certs.go:257] generating profile certs ...
	I1119 22:34:39.677353  274229 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/client.key
	I1119 22:34:39.677405  274229 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/apiserver.key.7bbc5920
	I1119 22:34:39.677448  274229 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/proxy-client.key
	I1119 22:34:39.677558  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem (1338 bytes)
	W1119 22:34:39.677586  274229 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829_empty.pem, impossibly tiny 0 bytes
	I1119 22:34:39.677595  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:34:39.677616  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:34:39.677637  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:34:39.677658  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 22:34:39.677696  274229 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:34:39.678358  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:34:39.696474  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:34:39.715655  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:34:39.733546  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:34:39.755609  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1119 22:34:39.773052  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1119 22:34:39.789276  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:34:39.805231  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/newest-cni-949690/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:34:39.821006  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:34:39.836925  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem --> /usr/share/ca-certificates/12829.pem (1338 bytes)
	I1119 22:34:39.852723  274229 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /usr/share/ca-certificates/128292.pem (1708 bytes)
	I1119 22:34:39.869749  274229 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:34:39.881905  274229 ssh_runner.go:195] Run: openssl version
	I1119 22:34:39.887500  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12829.pem && ln -fs /usr/share/ca-certificates/12829.pem /etc/ssl/certs/12829.pem"
	I1119 22:34:39.895583  274229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12829.pem
	I1119 22:34:39.899048  274229 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12829.pem
	I1119 22:34:39.899094  274229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12829.pem
	I1119 22:34:39.932764  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12829.pem /etc/ssl/certs/51391683.0"
	I1119 22:34:39.939946  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128292.pem && ln -fs /usr/share/ca-certificates/128292.pem /etc/ssl/certs/128292.pem"
	I1119 22:34:39.948790  274229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128292.pem
	I1119 22:34:39.952241  274229 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128292.pem
	I1119 22:34:39.952289  274229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128292.pem
	I1119 22:34:39.986001  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128292.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:34:39.993282  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:34:40.001064  274229 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:34:40.004497  274229 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:34:40.004538  274229 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:34:40.038594  274229 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
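The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs, and the symlinks created right afterwards (51391683.0, 3ec20f2e.0, b5213941.0) are named after exactly that hash. A minimal illustration using the minikube CA:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                                   # b5213941 for this CA, matching the symlink above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"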
	I1119 22:34:40.046222  274229 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:34:40.049695  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:34:40.083766  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:34:40.116704  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:34:40.150004  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:34:40.189464  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:34:40.244982  274229 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
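Each openssl x509 -checkend 86400 call above exits 0 only if the certificate is still valid for at least the next 86400 seconds (24 hours). A compact way to run the same check across several control-plane certs (a sketch, assuming the same /var/lib/minikube/certs layout on the node) is:

    for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
             /var/lib/minikube/certs/etcd/server.crt \
             /var/lib/minikube/certs/front-proxy-client.crt; do
      sudo openssl x509 -noout -in "$c" -checkend 86400 && echo "OK  $c" || echo "EXPIRING  $c"
    done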
	I1119 22:34:40.299485  274229 kubeadm.go:401] StartCluster: {Name:newest-cni-949690 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-949690 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:34:40.299589  274229 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:34:40.299646  274229 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:34:40.339137  274229 cri.go:89] found id: "161338ce75f2c0c31e0d4eff895db0725cce94686d5b7734a4417e80450100be"
	I1119 22:34:40.339168  274229 cri.go:89] found id: "e0bf6ee50782a92838fb4803387070f4bb8156a7272beb735ace68c4880b3665"
	I1119 22:34:40.339174  274229 cri.go:89] found id: "272ca1f3b39d6be33be2578290a83acd236c28ec785c58e870270aa855c550ea"
	I1119 22:34:40.339179  274229 cri.go:89] found id: "10b10591e7bf4e79373ab1993679cf4bedd7cef09257b365452ef6e96fcd19eb"
	I1119 22:34:40.339183  274229 cri.go:89] found id: ""
	I1119 22:34:40.339228  274229 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 22:34:40.355618  274229 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:34:40Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:34:40.355688  274229 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:34:40.365585  274229 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:34:40.365604  274229 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:34:40.365646  274229 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:34:40.374257  274229 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:34:40.375206  274229 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-949690" does not appear in /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:40.375750  274229 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-9335/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-949690" cluster setting kubeconfig missing "newest-cni-949690" context setting]
	I1119 22:34:40.376658  274229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:40.378292  274229 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:34:40.387104  274229 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1119 22:34:40.387173  274229 kubeadm.go:602] duration metric: took 21.562325ms to restartPrimaryControlPlane
	I1119 22:34:40.387180  274229 kubeadm.go:403] duration metric: took 87.702724ms to StartCluster
	I1119 22:34:40.387230  274229 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:40.387328  274229 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:34:40.390472  274229 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:34:40.390929  274229 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:34:40.390865  274229 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:34:40.391006  274229 addons.go:70] Setting default-storageclass=true in profile "newest-cni-949690"
	I1119 22:34:40.391021  274229 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-949690"
	I1119 22:34:40.390986  274229 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-949690"
	I1119 22:34:40.391198  274229 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-949690"
	I1119 22:34:40.391210  274229 config.go:182] Loaded profile config "newest-cni-949690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	W1119 22:34:40.391215  274229 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:34:40.391243  274229 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:40.391358  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:40.390998  274229 addons.go:70] Setting dashboard=true in profile "newest-cni-949690"
	I1119 22:34:40.391554  274229 addons.go:239] Setting addon dashboard=true in "newest-cni-949690"
	W1119 22:34:40.391568  274229 addons.go:248] addon dashboard should already be in state true
	I1119 22:34:40.391602  274229 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:40.391769  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:40.392330  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:40.393181  274229 out.go:179] * Verifying Kubernetes components...
	I1119 22:34:40.394355  274229 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:34:40.418872  274229 addons.go:239] Setting addon default-storageclass=true in "newest-cni-949690"
	W1119 22:34:40.418895  274229 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:34:40.418931  274229 host.go:66] Checking if "newest-cni-949690" exists ...
	I1119 22:34:40.419558  274229 cli_runner.go:164] Run: docker container inspect newest-cni-949690 --format={{.State.Status}}
	I1119 22:34:40.420933  274229 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:34:40.422045  274229 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:34:40.422131  274229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:34:40.422102  274229 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 22:34:40.422257  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:40.424181  274229 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 22:34:37.096614  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:34:37.096952  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
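The healthz poll above is an HTTPS GET against the apiserver; the same probe can be run by hand from the host (a sketch; -k skips verification of the minikube-generated serving certificate):

    curl -k --max-time 2 https://192.168.94.2:8443/healthz
    # "connection refused" here matches the log line above: the apiserver is not listening yet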
	I1119 22:34:37.097004  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:34:37.097051  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:34:37.122255  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:37.122270  229026 cri.go:89] found id: ""
	I1119 22:34:37.122277  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:34:37.122315  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:37.125982  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:34:37.126034  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:34:37.151925  229026 cri.go:89] found id: ""
	I1119 22:34:37.151947  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.151958  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:34:37.151966  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:34:37.152013  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:34:37.179757  229026 cri.go:89] found id: ""
	I1119 22:34:37.179787  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.179796  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:34:37.179804  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:34:37.179872  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:34:37.205929  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:37.205950  229026 cri.go:89] found id: ""
	I1119 22:34:37.205958  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:34:37.205997  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:37.210370  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:34:37.210444  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:34:37.236133  229026 cri.go:89] found id: ""
	I1119 22:34:37.236156  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.236167  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:34:37.236174  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:34:37.236214  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:34:37.262353  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:37.262376  229026 cri.go:89] found id: ""
	I1119 22:34:37.262385  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:34:37.262441  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:37.265937  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:34:37.266000  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:34:37.290077  229026 cri.go:89] found id: ""
	I1119 22:34:37.290098  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.290110  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:34:37.290117  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:34:37.290164  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:34:37.319419  229026 cri.go:89] found id: ""
	I1119 22:34:37.319450  229026 logs.go:282] 0 containers: []
	W1119 22:34:37.319460  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:34:37.319471  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:34:37.319482  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:37.345953  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:34:37.345976  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:34:37.400020  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:34:37.400046  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:34:37.430213  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:34:37.430235  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:34:37.524121  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:34:37.524145  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:34:37.537558  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:34:37.537581  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:34:37.595781  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:34:37.595828  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:34:37.595844  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:37.627780  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:34:37.627843  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:40.185908  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:34:40.186311  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:34:40.186357  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:34:40.186404  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:34:40.220121  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:40.220146  229026 cri.go:89] found id: ""
	I1119 22:34:40.220157  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:34:40.220214  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:40.224985  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:34:40.225047  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:34:40.261312  229026 cri.go:89] found id: ""
	I1119 22:34:40.261335  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.261344  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:34:40.261351  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:34:40.261431  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:34:40.297591  229026 cri.go:89] found id: ""
	I1119 22:34:40.297635  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.297646  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:34:40.297654  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:34:40.297722  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:34:40.337446  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:40.337482  229026 cri.go:89] found id: ""
	I1119 22:34:40.337492  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:34:40.337546  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:40.342719  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:34:40.342786  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:34:40.373745  229026 cri.go:89] found id: ""
	I1119 22:34:40.373807  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.373849  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:34:40.373868  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:34:40.373953  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:34:40.412795  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:40.412900  229026 cri.go:89] found id: ""
	I1119 22:34:40.412913  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:34:40.413150  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:40.418705  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:34:40.418809  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:34:40.465960  229026 cri.go:89] found id: ""
	I1119 22:34:40.465984  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.465993  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:34:40.466000  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:34:40.466057  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:34:40.504267  229026 cri.go:89] found id: ""
	I1119 22:34:40.504302  229026 logs.go:282] 0 containers: []
	W1119 22:34:40.504312  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:34:40.504323  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:34:40.504337  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:40.584015  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:34:40.584054  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:40.621545  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:34:40.621610  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:34:40.691569  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:34:40.691595  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:34:40.727716  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:34:40.727781  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:34:40.860310  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:34:40.860338  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:34:40.875366  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:34:40.875392  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:34:40.933153  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:34:40.933174  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:34:40.933190  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:40.425251  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 22:34:40.425307  274229 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 22:34:40.425373  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:40.450381  274229 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:34:40.450443  274229 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:34:40.450498  274229 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-949690
	I1119 22:34:40.465605  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:40.471214  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:40.480709  274229 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/newest-cni-949690/id_rsa Username:docker}
	I1119 22:34:40.552184  274229 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:34:40.565175  274229 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:34:40.565246  274229 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:34:40.580208  274229 api_server.go:72] duration metric: took 189.242933ms to wait for apiserver process to appear ...
	I1119 22:34:40.580230  274229 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:34:40.580246  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:40.585735  274229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:34:40.588059  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 22:34:40.588076  274229 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 22:34:40.591600  274229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:34:40.608904  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 22:34:40.609009  274229 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 22:34:40.627861  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 22:34:40.627885  274229 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 22:34:40.647791  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 22:34:40.647810  274229 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 22:34:40.668060  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 22:34:40.668081  274229 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 22:34:40.683240  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 22:34:40.683259  274229 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 22:34:40.695552  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 22:34:40.695625  274229 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 22:34:40.711790  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 22:34:40.711809  274229 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 22:34:40.726528  274229 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:34:40.726545  274229 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 22:34:40.739660  274229 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:34:42.105661  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 22:34:42.105695  274229 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 22:34:42.105714  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:42.112858  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1119 22:34:42.112885  274229 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1119 22:34:42.581271  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:42.585719  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:34:42.585741  274229 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:34:42.589179  274229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.003414434s)
	I1119 22:34:42.589245  274229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.997622304s)
	I1119 22:34:42.589349  274229 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.849664457s)
	I1119 22:34:42.590952  274229 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-949690 addons enable metrics-server
	
	I1119 22:34:42.600811  274229 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1119 22:34:37.944754  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	W1119 22:34:39.945116  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	W1119 22:34:41.945409  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	I1119 22:34:42.601888  274229 addons.go:515] duration metric: took 2.211028002s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1119 22:34:43.081164  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:43.086527  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:34:43.086560  274229 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:34:43.581251  274229 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1119 22:34:43.586009  274229 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1119 22:34:43.587182  274229 api_server.go:141] control plane version: v1.34.1
	I1119 22:34:43.587213  274229 api_server.go:131] duration metric: took 3.006975576s to wait for apiserver health ...
	I1119 22:34:43.587226  274229 system_pods.go:43] waiting for kube-system pods to appear ...
	I1119 22:34:43.591079  274229 system_pods.go:59] 8 kube-system pods found
	I1119 22:34:43.591111  274229 system_pods.go:61] "coredns-66bc5c9577-wjbzn" [be4fac81-534c-4a17-b208-8ad44d7e9504] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:34:43.591123  274229 system_pods.go:61] "etcd-newest-cni-949690" [77f0100c-0902-434d-9782-9ff8d579d2fb] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1119 22:34:43.591137  274229 system_pods.go:61] "kindnet-fw45d" [b409ae83-4d6c-42a0-a436-2159f75e1458] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1119 22:34:43.591151  274229 system_pods.go:61] "kube-apiserver-newest-cni-949690" [8dce48d6-c1e0-4cae-a68a-c5dbf4a62adb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1119 22:34:43.591162  274229 system_pods.go:61] "kube-controller-manager-newest-cni-949690" [f61aadf5-fe6a-4566-a44e-f98c9b09b812] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1119 22:34:43.591180  274229 system_pods.go:61] "kube-proxy-f98bb" [391d2f06-e215-4d11-a63e-36749e0fdf39] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1119 22:34:43.591189  274229 system_pods.go:61] "kube-scheduler-newest-cni-949690" [04596963-6c61-45c1-bbcb-59e57760f2b2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1119 22:34:43.591199  274229 system_pods.go:61] "storage-provisioner" [11651cac-2eb3-47f8-be2c-b30375bc4461] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1119 22:34:43.591207  274229 system_pods.go:74] duration metric: took 3.971817ms to wait for pod list to return data ...
	I1119 22:34:43.591213  274229 default_sa.go:34] waiting for default service account to be created ...
	I1119 22:34:43.593694  274229 default_sa.go:45] found service account: "default"
	I1119 22:34:43.593711  274229 default_sa.go:55] duration metric: took 2.491157ms for default service account to be created ...
	I1119 22:34:43.593720  274229 kubeadm.go:587] duration metric: took 3.202759307s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1119 22:34:43.593733  274229 node_conditions.go:102] verifying NodePressure condition ...
	I1119 22:34:43.596317  274229 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1119 22:34:43.596343  274229 node_conditions.go:123] node cpu capacity is 8
	I1119 22:34:43.596358  274229 node_conditions.go:105] duration metric: took 2.619744ms to run NodePressure ...
	I1119 22:34:43.596372  274229 start.go:242] waiting for startup goroutines ...
	I1119 22:34:43.596385  274229 start.go:247] waiting for cluster config update ...
	I1119 22:34:43.596400  274229 start.go:256] writing updated cluster config ...
	I1119 22:34:43.596668  274229 ssh_runner.go:195] Run: rm -f paused
	I1119 22:34:43.652235  274229 start.go:628] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1119 22:34:43.654935  274229 out.go:179] * Done! kubectl is now configured to use "newest-cni-949690" cluster and "default" namespace by default
	I1119 22:34:43.470876  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:34:43.471189  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:34:43.471248  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:34:43.471305  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:34:43.498537  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:43.498560  229026 cri.go:89] found id: ""
	I1119 22:34:43.498571  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:34:43.498625  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:43.502591  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:34:43.502643  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:34:43.528237  229026 cri.go:89] found id: ""
	I1119 22:34:43.528260  229026 logs.go:282] 0 containers: []
	W1119 22:34:43.528268  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:34:43.528274  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:34:43.528326  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:34:43.554384  229026 cri.go:89] found id: ""
	I1119 22:34:43.554411  229026 logs.go:282] 0 containers: []
	W1119 22:34:43.554421  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:34:43.554429  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:34:43.554484  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:34:43.581136  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:43.581158  229026 cri.go:89] found id: ""
	I1119 22:34:43.581168  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:34:43.581216  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:43.585138  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:34:43.585204  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:34:43.616330  229026 cri.go:89] found id: ""
	I1119 22:34:43.616356  229026 logs.go:282] 0 containers: []
	W1119 22:34:43.616365  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:34:43.616373  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:34:43.616427  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:34:43.644685  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:43.644706  229026 cri.go:89] found id: ""
	I1119 22:34:43.644713  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:34:43.644767  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:34:43.648785  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:34:43.648860  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:34:43.677478  229026 cri.go:89] found id: ""
	I1119 22:34:43.677531  229026 logs.go:282] 0 containers: []
	W1119 22:34:43.677555  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:34:43.677574  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:34:43.677630  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:34:43.704561  229026 cri.go:89] found id: ""
	I1119 22:34:43.704585  229026 logs.go:282] 0 containers: []
	W1119 22:34:43.704594  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:34:43.704605  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:34:43.704620  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:34:43.739689  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:34:43.739720  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:34:43.799758  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:34:43.799785  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:34:43.828101  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:34:43.828135  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:34:43.872752  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:34:43.872779  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:34:43.904942  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:34:43.904974  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:34:44.030363  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:34:44.030390  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:34:44.045931  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:34:44.045960  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:34:44.117148  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1119 22:34:43.945944  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	W1119 22:34:46.445225  269329 pod_ready.go:104] pod "coredns-66bc5c9577-jmjmf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.048839952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.051468815Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f8a60a45-1472-4d14-8340-d6f19697caa9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.051796375Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e42f5660-dcfa-4341-b88d-53975f9cd043 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.052999819Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.053426695Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.053593617Z" level=info msg="Ran pod sandbox ed379431a560199c0aa092412a163ae33dfbdd386dab077512c7ed25c4e070a8 with infra container: kube-system/kube-proxy-f98bb/POD" id=f8a60a45-1472-4d14-8340-d6f19697caa9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.05424212Z" level=info msg="Ran pod sandbox ef938b7f052caa695acc4a3bb410f8ba9164c3d34e82978f54dc95dd61cb173d with infra container: kube-system/kindnet-fw45d/POD" id=e42f5660-dcfa-4341-b88d-53975f9cd043 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.054569854Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a67e1a00-fdfd-4031-979f-e27657110ffe name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.055086412Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=cc8c6cc1-d029-453c-87c4-8e4f9f3abbc7 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.055395691Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=c4a87a87-54d7-428e-965d-a4c05ad33a78 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.056008972Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=7fabe749-ea5d-48af-82cf-bd3e4ab4775a name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.056346583Z" level=info msg="Creating container: kube-system/kube-proxy-f98bb/kube-proxy" id=87cc1015-6553-47e8-bc40-2a09558713c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.056452552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.056944205Z" level=info msg="Creating container: kube-system/kindnet-fw45d/kindnet-cni" id=eb64d65b-c922-41a6-ad63-718735107261 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.05701118Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.061118886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.061789822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.062049121Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.062798533Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.095278687Z" level=info msg="Created container 5a6d9dc14195b9b242a6301f8b7a66e364d6579075b69f40a82dee8c70a14e75: kube-system/kindnet-fw45d/kindnet-cni" id=eb64d65b-c922-41a6-ad63-718735107261 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.095902991Z" level=info msg="Starting container: 5a6d9dc14195b9b242a6301f8b7a66e364d6579075b69f40a82dee8c70a14e75" id=469e6df2-a38f-4344-a9d7-119605814381 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.097969554Z" level=info msg="Started container" PID=1052 containerID=5a6d9dc14195b9b242a6301f8b7a66e364d6579075b69f40a82dee8c70a14e75 description=kube-system/kindnet-fw45d/kindnet-cni id=469e6df2-a38f-4344-a9d7-119605814381 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef938b7f052caa695acc4a3bb410f8ba9164c3d34e82978f54dc95dd61cb173d
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.101573188Z" level=info msg="Created container a3e1cf5d0f6522f0e93770a58c8aa00bc49525b93f47672a1d84eb8e5181a5eb: kube-system/kube-proxy-f98bb/kube-proxy" id=87cc1015-6553-47e8-bc40-2a09558713c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.10215178Z" level=info msg="Starting container: a3e1cf5d0f6522f0e93770a58c8aa00bc49525b93f47672a1d84eb8e5181a5eb" id=3b7bdc85-5e39-441c-9bfc-05f3caf127eb name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:34:43 newest-cni-949690 crio[521]: time="2025-11-19T22:34:43.104756341Z" level=info msg="Started container" PID=1053 containerID=a3e1cf5d0f6522f0e93770a58c8aa00bc49525b93f47672a1d84eb8e5181a5eb description=kube-system/kube-proxy-f98bb/kube-proxy id=3b7bdc85-5e39-441c-9bfc-05f3caf127eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=ed379431a560199c0aa092412a163ae33dfbdd386dab077512c7ed25c4e070a8
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5a6d9dc14195b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   ef938b7f052ca       kindnet-fw45d                               kube-system
	a3e1cf5d0f652       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   ed379431a5601       kube-proxy-f98bb                            kube-system
	161338ce75f2c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   a8c511f195729       kube-apiserver-newest-cni-949690            kube-system
	e0bf6ee50782a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   7839be3bfd79d       etcd-newest-cni-949690                      kube-system
	272ca1f3b39d6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   dc9ee25f7a25c       kube-scheduler-newest-cni-949690            kube-system
	10b10591e7bf4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   e7782d5d44497       kube-controller-manager-newest-cni-949690   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-949690
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-949690
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=newest-cni-949690
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_34_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:34:09 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-949690
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:34:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:34:42 +0000   Wed, 19 Nov 2025 22:34:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:34:42 +0000   Wed, 19 Nov 2025 22:34:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:34:42 +0000   Wed, 19 Nov 2025 22:34:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 19 Nov 2025 22:34:42 +0000   Wed, 19 Nov 2025 22:34:07 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-949690
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                76884ddf-0fb7-4736-8296-1d7cf95f4d03
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-949690                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-fw45d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-newest-cni-949690             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-newest-cni-949690    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-f98bb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-newest-cni-949690             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s (x8 over 42s)  kubelet          Node newest-cni-949690 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s (x8 over 42s)  kubelet          Node newest-cni-949690 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s (x8 over 42s)  kubelet          Node newest-cni-949690 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    37s                kubelet          Node newest-cni-949690 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  37s                kubelet          Node newest-cni-949690 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     37s                kubelet          Node newest-cni-949690 status is now: NodeHasSufficientPID
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           33s                node-controller  Node newest-cni-949690 event: Registered Node newest-cni-949690 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node newest-cni-949690 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node newest-cni-949690 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 9s)    kubelet          Node newest-cni-949690 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-949690 event: Registered Node newest-cni-949690 in Controller
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [e0bf6ee50782a92838fb4803387070f4bb8156a7272beb735ace68c4880b3665] <==
	{"level":"warn","ts":"2025-11-19T22:34:41.505551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.511601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.527631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.534125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.541734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.549785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.556094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.561647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.567763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.577907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.584607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.591297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.597734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.604384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.610093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.616754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.622702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.629214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.635267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.641357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.647294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.662829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.668565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.674561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:41.723616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38128","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:34:48 up  1:17,  0 user,  load average: 2.76, 2.76, 1.90
	Linux newest-cni-949690 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5a6d9dc14195b9b242a6301f8b7a66e364d6579075b69f40a82dee8c70a14e75] <==
	I1119 22:34:43.229561       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:34:43.229879       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1119 22:34:43.230000       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:34:43.230016       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:34:43.230041       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:34:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:34:43.433068       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:34:43.433121       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:34:43.433133       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:34:43.433283       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [161338ce75f2c0c31e0d4eff895db0725cce94686d5b7734a4417e80450100be] <==
	I1119 22:34:42.166610       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:34:42.167088       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 22:34:42.167116       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 22:34:42.167173       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:34:42.167775       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:34:42.167881       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1119 22:34:42.167944       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 22:34:42.167951       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 22:34:42.168007       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 22:34:42.170326       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:34:42.175665       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 22:34:42.177899       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 22:34:42.177921       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 22:34:42.199770       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:34:42.412247       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:34:42.437069       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:34:42.453658       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:34:42.460400       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:34:42.466523       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:34:42.494412       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.101.19"}
	I1119 22:34:42.504886       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.237.216"}
	I1119 22:34:43.081077       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:34:45.704772       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:34:45.905135       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:34:46.005900       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [10b10591e7bf4e79373ab1993679cf4bedd7cef09257b365452ef6e96fcd19eb] <==
	I1119 22:34:45.488989       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1119 22:34:45.488996       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1119 22:34:45.489946       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:34:45.500447       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:34:45.500510       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:34:45.500535       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:34:45.500543       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:34:45.500558       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:34:45.500568       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:34:45.500585       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1119 22:34:45.500684       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:34:45.500837       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:34:45.500940       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:34:45.501974       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:34:45.502067       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:34:45.502173       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-949690"
	I1119 22:34:45.502237       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1119 22:34:45.503777       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:34:45.506660       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1119 22:34:45.508958       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 22:34:45.509051       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:34:45.511203       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1119 22:34:45.512754       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1119 22:34:45.517900       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:34:45.534237       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [a3e1cf5d0f6522f0e93770a58c8aa00bc49525b93f47672a1d84eb8e5181a5eb] <==
	I1119 22:34:43.146680       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:34:43.220755       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:34:43.321604       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:34:43.321632       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1119 22:34:43.321715       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:34:43.340690       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:34:43.340735       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:34:43.345427       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:34:43.345777       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:34:43.345805       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:34:43.347202       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:34:43.347230       1 config.go:200] "Starting service config controller"
	I1119 22:34:43.347245       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:34:43.347235       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:34:43.347277       1 config.go:309] "Starting node config controller"
	I1119 22:34:43.347287       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:34:43.347294       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:34:43.348144       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:34:43.348172       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:34:43.447401       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:34:43.447511       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:34:43.449183       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [272ca1f3b39d6be33be2578290a83acd236c28ec785c58e870270aa855c550ea] <==
	I1119 22:34:40.687496       1 serving.go:386] Generated self-signed cert in-memory
	W1119 22:34:42.108664       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 22:34:42.108698       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 22:34:42.108709       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 22:34:42.108719       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 22:34:42.129274       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 22:34:42.129300       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:34:42.132374       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:34:42.132410       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:34:42.132866       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:34:42.132965       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 22:34:42.233501       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:34:41 newest-cni-949690 kubelet[677]: E1119 22:34:41.776995     677 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-949690\" not found" node="newest-cni-949690"
	Nov 19 22:34:41 newest-cni-949690 kubelet[677]: E1119 22:34:41.777177     677 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-949690\" not found" node="newest-cni-949690"
	Nov 19 22:34:41 newest-cni-949690 kubelet[677]: E1119 22:34:41.777294     677 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-949690\" not found" node="newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.143555     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.199881     677 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.199982     677 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.200019     677 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.200911     677 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: E1119 22:34:42.253257     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-949690\" already exists" pod="kube-system/kube-scheduler-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.253290     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: E1119 22:34:42.258370     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-949690\" already exists" pod="kube-system/etcd-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.258402     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: E1119 22:34:42.264121     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-949690\" already exists" pod="kube-system/kube-apiserver-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.264152     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: E1119 22:34:42.269205     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-949690\" already exists" pod="kube-system/kube-controller-manager-newest-cni-949690"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.739648     677 apiserver.go:52] "Watching apiserver"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.796473     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b409ae83-4d6c-42a0-a436-2159f75e1458-cni-cfg\") pod \"kindnet-fw45d\" (UID: \"b409ae83-4d6c-42a0-a436-2159f75e1458\") " pod="kube-system/kindnet-fw45d"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.796523     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b409ae83-4d6c-42a0-a436-2159f75e1458-lib-modules\") pod \"kindnet-fw45d\" (UID: \"b409ae83-4d6c-42a0-a436-2159f75e1458\") " pod="kube-system/kindnet-fw45d"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.796577     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b409ae83-4d6c-42a0-a436-2159f75e1458-xtables-lock\") pod \"kindnet-fw45d\" (UID: \"b409ae83-4d6c-42a0-a436-2159f75e1458\") " pod="kube-system/kindnet-fw45d"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.843135     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.897231     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/391d2f06-e215-4d11-a63e-36749e0fdf39-lib-modules\") pod \"kube-proxy-f98bb\" (UID: \"391d2f06-e215-4d11-a63e-36749e0fdf39\") " pod="kube-system/kube-proxy-f98bb"
	Nov 19 22:34:42 newest-cni-949690 kubelet[677]: I1119 22:34:42.897498     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/391d2f06-e215-4d11-a63e-36749e0fdf39-xtables-lock\") pod \"kube-proxy-f98bb\" (UID: \"391d2f06-e215-4d11-a63e-36749e0fdf39\") " pod="kube-system/kube-proxy-f98bb"
	Nov 19 22:34:44 newest-cni-949690 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:34:44 newest-cni-949690 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:34:44 newest-cni-949690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
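The kubelet entries at the tail of the log above show kubelet.service being deactivated; that is expected during `minikube pause`, which stops kubelet before pausing the container-runtime workloads (the same sequence is visible in the embed-certs trace later in this report). A minimal manual check of the node state, assuming the newest-cni-949690 profile still exists and is reachable via `minikube ssh` (illustrative commands, not part of the test):

	# Confirm whether kubelet was left stopped by the failed pause attempt.
	out/minikube-linux-amd64 -p newest-cni-949690 ssh -- sudo systemctl is-active kubelet
	# List the kube-system containers that the pause step tried to enumerate.
	out/minikube-linux-amd64 -p newest-cni-949690 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system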
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-949690 -n newest-cni-949690
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-949690 -n newest-cni-949690: exit status 2 (310.58058ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-949690 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-wjbzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nh2v2 kubernetes-dashboard-855c9754f9-fcfs5
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-949690 describe pod coredns-66bc5c9577-wjbzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nh2v2 kubernetes-dashboard-855c9754f9-fcfs5
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-949690 describe pod coredns-66bc5c9577-wjbzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nh2v2 kubernetes-dashboard-855c9754f9-fcfs5: exit status 1 (60.087201ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-wjbzn" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-nh2v2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-fcfs5" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-949690 describe pod coredns-66bc5c9577-wjbzn storage-provisioner dashboard-metrics-scraper-6ffb444bf9-nh2v2 kubernetes-dashboard-855c9754f9-fcfs5: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (5.78s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-443380 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-443380 --alsologtostderr -v=1: exit status 80 (1.918218816s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-443380 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:35:11.156876  285734 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:35:11.157111  285734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:35:11.157120  285734 out.go:374] Setting ErrFile to fd 2...
	I1119 22:35:11.157124  285734 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:35:11.157320  285734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:35:11.157543  285734 out.go:368] Setting JSON to false
	I1119 22:35:11.157594  285734 mustload.go:66] Loading cluster: embed-certs-443380
	I1119 22:35:11.158009  285734 config.go:182] Loaded profile config "embed-certs-443380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:35:11.158439  285734 cli_runner.go:164] Run: docker container inspect embed-certs-443380 --format={{.State.Status}}
	I1119 22:35:11.179682  285734 host.go:66] Checking if "embed-certs-443380" exists ...
	I1119 22:35:11.179963  285734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:35:11.245636  285734 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-19 22:35:11.234927489 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:35:11.246429  285734 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763575914-21918/minikube-v1.37.0-1763575914-21918-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763575914-21918-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-443380 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 22:35:11.248494  285734 out.go:179] * Pausing node embed-certs-443380 ... 
	I1119 22:35:11.249679  285734 host.go:66] Checking if "embed-certs-443380" exists ...
	I1119 22:35:11.249994  285734 ssh_runner.go:195] Run: systemctl --version
	I1119 22:35:11.250048  285734 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-443380
	I1119 22:35:11.270152  285734 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/embed-certs-443380/id_rsa Username:docker}
	I1119 22:35:11.370148  285734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:35:11.392415  285734 pause.go:52] kubelet running: true
	I1119 22:35:11.392474  285734 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:35:11.598127  285734 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:35:11.598260  285734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:35:11.680589  285734 cri.go:89] found id: "e32255d6628287964aaa1c81ffd5cb354d7246207c8bcc2ec09d6d648898b1bb"
	I1119 22:35:11.680620  285734 cri.go:89] found id: "df257a051a08e4a48e737e015b9042f67752f43236a8a79e391dd2ec99c2c20c"
	I1119 22:35:11.680627  285734 cri.go:89] found id: "1a691b92d4e519b79315b18ca34f25853a37f7381a8be39393abe3dd2e5fc138"
	I1119 22:35:11.680631  285734 cri.go:89] found id: "3034e4e70b518b88b7e724642f117283a7941f0f13aabe55e1d1c03789730810"
	I1119 22:35:11.680635  285734 cri.go:89] found id: "4ca72c190c2ac15c8d89f95f3c61a04b635604352eb3e300c4f8e35cb5f03acd"
	I1119 22:35:11.680640  285734 cri.go:89] found id: "847f5d7dba3ab17916fecc3496f64e3c432a7aea38029dc58d6ca5c607f49bf4"
	I1119 22:35:11.680644  285734 cri.go:89] found id: "f2e2adcbdf2ed28a414676c53047f68a57fcf6fb525c42cea338059bedb6224c"
	I1119 22:35:11.680648  285734 cri.go:89] found id: "185e753f982bb76405831c8b358ebdfd082e42f64259200ff2771e2287ccd2a7"
	I1119 22:35:11.680652  285734 cri.go:89] found id: "de4131eab48f0dd8d34f317e598532f0311ff6539bf32deb7148043cda0db569"
	I1119 22:35:11.680659  285734 cri.go:89] found id: "dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa"
	I1119 22:35:11.680663  285734 cri.go:89] found id: "f8a07463feed4f6a4d7c8e5e4b1d14a47cab1a7fa1ce43c84aba5ba99da95c3f"
	I1119 22:35:11.680667  285734 cri.go:89] found id: ""
	I1119 22:35:11.680709  285734 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:35:11.693880  285734 retry.go:31] will retry after 201.69492ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:35:11Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:35:11.896331  285734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:35:11.913719  285734 pause.go:52] kubelet running: false
	I1119 22:35:11.913776  285734 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:35:12.181260  285734 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:35:12.181333  285734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:35:12.272110  285734 cri.go:89] found id: "e32255d6628287964aaa1c81ffd5cb354d7246207c8bcc2ec09d6d648898b1bb"
	I1119 22:35:12.272137  285734 cri.go:89] found id: "df257a051a08e4a48e737e015b9042f67752f43236a8a79e391dd2ec99c2c20c"
	I1119 22:35:12.272143  285734 cri.go:89] found id: "1a691b92d4e519b79315b18ca34f25853a37f7381a8be39393abe3dd2e5fc138"
	I1119 22:35:12.272148  285734 cri.go:89] found id: "3034e4e70b518b88b7e724642f117283a7941f0f13aabe55e1d1c03789730810"
	I1119 22:35:12.272152  285734 cri.go:89] found id: "4ca72c190c2ac15c8d89f95f3c61a04b635604352eb3e300c4f8e35cb5f03acd"
	I1119 22:35:12.272158  285734 cri.go:89] found id: "847f5d7dba3ab17916fecc3496f64e3c432a7aea38029dc58d6ca5c607f49bf4"
	I1119 22:35:12.272162  285734 cri.go:89] found id: "f2e2adcbdf2ed28a414676c53047f68a57fcf6fb525c42cea338059bedb6224c"
	I1119 22:35:12.272167  285734 cri.go:89] found id: "185e753f982bb76405831c8b358ebdfd082e42f64259200ff2771e2287ccd2a7"
	I1119 22:35:12.272172  285734 cri.go:89] found id: "de4131eab48f0dd8d34f317e598532f0311ff6539bf32deb7148043cda0db569"
	I1119 22:35:12.272186  285734 cri.go:89] found id: "dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa"
	I1119 22:35:12.272191  285734 cri.go:89] found id: "f8a07463feed4f6a4d7c8e5e4b1d14a47cab1a7fa1ce43c84aba5ba99da95c3f"
	I1119 22:35:12.272195  285734 cri.go:89] found id: ""
	I1119 22:35:12.272256  285734 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:35:12.285707  285734 retry.go:31] will retry after 430.469883ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:35:12Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:35:12.716339  285734 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:35:12.728948  285734 pause.go:52] kubelet running: false
	I1119 22:35:12.729003  285734 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:35:12.915717  285734 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:35:12.915808  285734 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:35:12.984031  285734 cri.go:89] found id: "e32255d6628287964aaa1c81ffd5cb354d7246207c8bcc2ec09d6d648898b1bb"
	I1119 22:35:12.984061  285734 cri.go:89] found id: "df257a051a08e4a48e737e015b9042f67752f43236a8a79e391dd2ec99c2c20c"
	I1119 22:35:12.984067  285734 cri.go:89] found id: "1a691b92d4e519b79315b18ca34f25853a37f7381a8be39393abe3dd2e5fc138"
	I1119 22:35:12.984073  285734 cri.go:89] found id: "3034e4e70b518b88b7e724642f117283a7941f0f13aabe55e1d1c03789730810"
	I1119 22:35:12.984078  285734 cri.go:89] found id: "4ca72c190c2ac15c8d89f95f3c61a04b635604352eb3e300c4f8e35cb5f03acd"
	I1119 22:35:12.984082  285734 cri.go:89] found id: "847f5d7dba3ab17916fecc3496f64e3c432a7aea38029dc58d6ca5c607f49bf4"
	I1119 22:35:12.984087  285734 cri.go:89] found id: "f2e2adcbdf2ed28a414676c53047f68a57fcf6fb525c42cea338059bedb6224c"
	I1119 22:35:12.984091  285734 cri.go:89] found id: "185e753f982bb76405831c8b358ebdfd082e42f64259200ff2771e2287ccd2a7"
	I1119 22:35:12.984094  285734 cri.go:89] found id: "de4131eab48f0dd8d34f317e598532f0311ff6539bf32deb7148043cda0db569"
	I1119 22:35:12.984116  285734 cri.go:89] found id: "dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa"
	I1119 22:35:12.984122  285734 cri.go:89] found id: "f8a07463feed4f6a4d7c8e5e4b1d14a47cab1a7fa1ce43c84aba5ba99da95c3f"
	I1119 22:35:12.984125  285734 cri.go:89] found id: ""
	I1119 22:35:12.984161  285734 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:35:13.000026  285734 out.go:203] 
	W1119 22:35:13.001243  285734 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:35:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:35:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 22:35:13.001276  285734 out.go:285] * 
	* 
	W1119 22:35:13.006309  285734 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 22:35:13.007486  285734 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-443380 --alsologtostderr -v=1 failed: exit status 80
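The stderr trace above shows where the pause fails: after kubelet is disabled and the kube-system/kubernetes-dashboard containers are found via crictl, `sudo runc list -f json` exits with status 1 because /run/runc does not exist on the node. A minimal sketch for reproducing that step by hand, assuming the embed-certs-443380 profile is still running (the commands mirror the ones in the trace and are illustrative only):

	# Re-run the listing that pause performs; it fails when /run/runc is absent.
	out/minikube-linux-amd64 -p embed-certs-443380 ssh -- sudo runc list -f json
	# Check whether the default runc state directory is present on the node.
	out/minikube-linux-amd64 -p embed-certs-443380 ssh -- ls -ld /run/runc
	# The containers themselves remain visible through the CRI, as the trace shows.
	out/minikube-linux-amd64 -p embed-certs-443380 ssh -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system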
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-443380
helpers_test.go:243: (dbg) docker inspect embed-certs-443380:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49",
	        "Created": "2025-11-19T22:33:06.74702883Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 269527,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:34:08.144483977Z",
	            "FinishedAt": "2025-11-19T22:34:07.194772047Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49/hosts",
	        "LogPath": "/var/lib/docker/containers/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49-json.log",
	        "Name": "/embed-certs-443380",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-443380:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-443380",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49",
	                "LowerDir": "/var/lib/docker/overlay2/d3c225c57eb8600f9e2c8c3b511484ac3eca82f93258077472e63e86d3473e47-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3c225c57eb8600f9e2c8c3b511484ac3eca82f93258077472e63e86d3473e47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3c225c57eb8600f9e2c8c3b511484ac3eca82f93258077472e63e86d3473e47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3c225c57eb8600f9e2c8c3b511484ac3eca82f93258077472e63e86d3473e47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-443380",
	                "Source": "/var/lib/docker/volumes/embed-certs-443380/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-443380",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-443380",
	                "name.minikube.sigs.k8s.io": "embed-certs-443380",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "45c768282ab68ea0c7a4cde1e8b10df00b9465abdf3bddadfb2aac195203ba32",
	            "SandboxKey": "/var/run/docker/netns/45c768282ab6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-443380": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "79be9ba27c325ef564b730d7c6a14208f6797c8013b71ad28befe3377b076629",
	                    "EndpointID": "98a0e06e8d2b39a8382e2d37248cc7ef5f99817b4e5a6ec715ff230711ef3ea1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "3a:f8:c9:9e:ca:2f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-443380",
	                        "f1d90b7b5af6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-443380 -n embed-certs-443380
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-443380 -n embed-certs-443380: exit status 2 (346.408943ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
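The status check above only queries the Host field via a Go template; when it exits non-zero after a pause attempt, a hedged way to see which component (Host, Kubelet, APIServer, Kubeconfig) is actually down, assuming the embed-certs-443380 profile still exists, is the JSON output form (illustrative, not part of the test):

	# Show all component states in one shot instead of a single templated field.
	out/minikube-linux-amd64 status -p embed-certs-443380 --output json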
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-443380 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-443380 logs -n 25: (1.042924821s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ image   │ no-preload-178067 image list --format=json                                                                                                                                                                                                    │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p no-preload-178067 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-443380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ stop    │ -p embed-certs-443380 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-443380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable metrics-server -p newest-cni-949690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ stop    │ -p newest-cni-949690 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable dashboard -p newest-cni-949690 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-409987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ image   │ newest-cni-949690 image list --format=json                                                                                                                                                                                                    │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ pause   │ -p newest-cni-949690 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-409987 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:35 UTC │
	│ delete  │ -p newest-cni-949690                                                                                                                                                                                                                          │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p newest-cni-949690                                                                                                                                                                                                                          │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p auto-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-409987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ start   │ -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ image   │ embed-certs-443380 image list --format=json                                                                                                                                                                                                   │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ pause   │ -p embed-certs-443380 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:35:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:35:02.669855  283427 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:35:02.669967  283427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:35:02.669974  283427 out.go:374] Setting ErrFile to fd 2...
	I1119 22:35:02.669980  283427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:35:02.670219  283427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:35:02.670653  283427 out.go:368] Setting JSON to false
	I1119 22:35:02.671776  283427 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4651,"bootTime":1763587052,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:35:02.671889  283427 start.go:143] virtualization: kvm guest
	I1119 22:35:02.676939  283427 out.go:179] * [default-k8s-diff-port-409987] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:35:02.678186  283427 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:35:02.678186  283427 notify.go:221] Checking for updates...
	I1119 22:35:02.679327  283427 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:35:02.680737  283427 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:35:02.682364  283427 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:35:02.683549  283427 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:35:02.686211  283427 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:35:02.687617  283427 config.go:182] Loaded profile config "default-k8s-diff-port-409987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:35:02.688102  283427 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:35:02.715631  283427 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:35:02.715778  283427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:35:02.776833  283427 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:35:02.766780998 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:35:02.777010  283427 docker.go:319] overlay module found
	I1119 22:35:02.778856  283427 out.go:179] * Using the docker driver based on existing profile
	I1119 22:35:02.779960  283427 start.go:309] selected driver: docker
	I1119 22:35:02.779974  283427 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:35:02.780050  283427 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:35:02.780629  283427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:35:02.838687  283427 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:35:02.829074701 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:35:02.839071  283427 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:35:02.839108  283427 cni.go:84] Creating CNI manager for ""
	I1119 22:35:02.839175  283427 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:35:02.839214  283427 start.go:353] cluster config:
	{Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:35:02.844217  283427 out.go:179] * Starting "default-k8s-diff-port-409987" primary control-plane node in "default-k8s-diff-port-409987" cluster
	I1119 22:35:02.845242  283427 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:35:02.846248  283427 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:35:02.847262  283427 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:35:02.847293  283427 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:35:02.847291  283427 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:35:02.847302  283427 cache.go:65] Caching tarball of preloaded images
	I1119 22:35:02.847400  283427 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:35:02.847416  283427 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:35:02.847533  283427 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json ...
	I1119 22:35:02.867513  283427 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:35:02.867540  283427 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:35:02.867558  283427 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:35:02.867590  283427 start.go:360] acquireMachinesLock for default-k8s-diff-port-409987: {Name:mk3691865877e78ad0fe52d2c0e71ee1c1c3699a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:35:02.867662  283427 start.go:364] duration metric: took 42.344µs to acquireMachinesLock for "default-k8s-diff-port-409987"
	I1119 22:35:02.867682  283427 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:35:02.867690  283427 fix.go:54] fixHost starting: 
	I1119 22:35:02.867962  283427 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:02.885640  283427 fix.go:112] recreateIfNeeded on default-k8s-diff-port-409987: state=Stopped err=<nil>
	W1119 22:35:02.885669  283427 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:35:02.056042  280396 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:35:02.169218  280396 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:35:02.389571  280396 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:35:02.631126  280396 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:35:02.901672  280396 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:35:02.901876  280396 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-654834 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:35:03.648885  280396 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:35:03.649073  280396 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-654834 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:35:03.778193  280396 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:35:04.001487  280396 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:35:04.251132  280396 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:35:04.251296  280396 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:35:04.562644  280396 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:35:04.803077  280396 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:35:04.974139  280396 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:35:05.145046  280396 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:35:05.539305  280396 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:35:05.539946  280396 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:35:05.545900  280396 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:35:02.116948  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:35:02.117305  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:35:02.117371  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:35:02.117425  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:35:02.143228  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:35:02.143251  229026 cri.go:89] found id: ""
	I1119 22:35:02.143261  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:35:02.143318  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:02.147100  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:35:02.147157  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:35:02.172547  229026 cri.go:89] found id: ""
	I1119 22:35:02.172569  229026 logs.go:282] 0 containers: []
	W1119 22:35:02.172579  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:35:02.172585  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:35:02.172639  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:35:02.197989  229026 cri.go:89] found id: ""
	I1119 22:35:02.198013  229026 logs.go:282] 0 containers: []
	W1119 22:35:02.198023  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:35:02.198031  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:35:02.198082  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:35:02.222637  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:35:02.222659  229026 cri.go:89] found id: ""
	I1119 22:35:02.222668  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:35:02.222721  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:02.226658  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:35:02.226714  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:35:02.253145  229026 cri.go:89] found id: ""
	I1119 22:35:02.253165  229026 logs.go:282] 0 containers: []
	W1119 22:35:02.253174  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:35:02.253182  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:35:02.253223  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:35:02.279155  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:35:02.279174  229026 cri.go:89] found id: ""
	I1119 22:35:02.279184  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:35:02.279236  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:02.282884  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:35:02.282940  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:35:02.310700  229026 cri.go:89] found id: ""
	I1119 22:35:02.310719  229026 logs.go:282] 0 containers: []
	W1119 22:35:02.310724  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:35:02.310734  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:35:02.310786  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:35:02.341517  229026 cri.go:89] found id: ""
	I1119 22:35:02.341542  229026 logs.go:282] 0 containers: []
	W1119 22:35:02.341552  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:35:02.341571  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:35:02.341588  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:35:02.402869  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:35:02.402898  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:35:02.430031  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:35:02.430063  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:35:02.486039  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:35:02.486063  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:35:02.519788  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:35:02.519865  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:35:02.623338  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:35:02.623372  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:35:02.638723  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:35:02.638751  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:35:02.708032  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:35:02.708051  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:35:02.708063  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:35:05.248147  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:35:05.248559  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:35:05.248608  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:35:05.248652  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:35:05.277299  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:35:05.277318  229026 cri.go:89] found id: ""
	I1119 22:35:05.277327  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:35:05.277383  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:05.281163  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:35:05.281241  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:35:05.306466  229026 cri.go:89] found id: ""
	I1119 22:35:05.306488  229026 logs.go:282] 0 containers: []
	W1119 22:35:05.306497  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:35:05.306503  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:35:05.306552  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:35:05.331313  229026 cri.go:89] found id: ""
	I1119 22:35:05.331336  229026 logs.go:282] 0 containers: []
	W1119 22:35:05.331345  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:35:05.331353  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:35:05.331407  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:35:05.356450  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:35:05.356469  229026 cri.go:89] found id: ""
	I1119 22:35:05.356477  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:35:05.356528  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:05.360095  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:35:05.360150  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:35:05.383638  229026 cri.go:89] found id: ""
	I1119 22:35:05.383656  229026 logs.go:282] 0 containers: []
	W1119 22:35:05.383664  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:35:05.383669  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:35:05.383719  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:35:05.408364  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:35:05.408382  229026 cri.go:89] found id: ""
	I1119 22:35:05.408389  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:35:05.408432  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:05.412032  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:35:05.412093  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:35:05.435185  229026 cri.go:89] found id: ""
	I1119 22:35:05.435203  229026 logs.go:282] 0 containers: []
	W1119 22:35:05.435209  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:35:05.435214  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:35:05.435249  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:35:05.461840  229026 cri.go:89] found id: ""
	I1119 22:35:05.461864  229026 logs.go:282] 0 containers: []
	W1119 22:35:05.461873  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:35:05.461883  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:35:05.461892  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:35:05.475030  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:35:05.475053  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:35:05.529327  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:35:05.529347  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:35:05.529360  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:35:05.565174  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:35:05.565203  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:35:05.625825  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:35:05.625861  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:35:05.651245  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:35:05.651270  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:35:05.697100  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:35:05.697127  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:35:05.726989  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:35:05.727055  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
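	(Editor's note: the "Gathering logs" loop above drives crictl over SSH, first resolving a container ID by component name and then tailing that container's logs. A minimal standalone sketch of the same two-step pattern, assuming crictl is pointed at the CRI-O socket configured earlier in this log; the component name and container ID below are placeholders:
	    # list container IDs for one component, then tail that container's logs
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo crictl logs --tail 400 <container-id>
	)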
	I1119 22:35:05.547528  280396 out.go:252]   - Booting up control plane ...
	I1119 22:35:05.547649  280396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:35:05.547754  280396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:35:05.548581  280396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:35:05.567203  280396 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:35:05.567319  280396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:35:05.573939  280396 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:35:05.574252  280396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:35:05.574315  280396 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:35:05.674071  280396 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:35:05.674268  280396 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:35:06.675790  280396 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001823895s
	I1119 22:35:06.679962  280396 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:35:06.680055  280396 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1119 22:35:06.680198  280396 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:35:06.680344  280396 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:35:02.887278  283427 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-409987" ...
	I1119 22:35:02.887341  283427 cli_runner.go:164] Run: docker start default-k8s-diff-port-409987
	I1119 22:35:03.172145  283427 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:03.192491  283427 kic.go:430] container "default-k8s-diff-port-409987" state is running.
	I1119 22:35:03.192946  283427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:35:03.210858  283427 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json ...
	I1119 22:35:03.211040  283427 machine.go:94] provisionDockerMachine start ...
	I1119 22:35:03.211092  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:03.231696  283427 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:03.231979  283427 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 22:35:03.231996  283427 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:35:03.232672  283427 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60904->127.0.0.1:33103: read: connection reset by peer
	I1119 22:35:06.356131  283427 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409987
	
	I1119 22:35:06.356159  283427 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-409987"
	I1119 22:35:06.356224  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:06.373966  283427 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:06.374243  283427 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 22:35:06.374264  283427 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-409987 && echo "default-k8s-diff-port-409987" | sudo tee /etc/hostname
	I1119 22:35:06.507995  283427 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409987
	
	I1119 22:35:06.508062  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:06.524980  283427 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:06.525193  283427 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 22:35:06.525211  283427 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-409987' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-409987/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-409987' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:35:06.652378  283427 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:35:06.652409  283427 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 22:35:06.652430  283427 ubuntu.go:190] setting up certificates
	I1119 22:35:06.652450  283427 provision.go:84] configureAuth start
	I1119 22:35:06.652514  283427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:35:06.671066  283427 provision.go:143] copyHostCerts
	I1119 22:35:06.671131  283427 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem, removing ...
	I1119 22:35:06.671145  283427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem
	I1119 22:35:06.671219  283427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 22:35:06.671356  283427 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem, removing ...
	I1119 22:35:06.671371  283427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem
	I1119 22:35:06.671412  283427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 22:35:06.671511  283427 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem, removing ...
	I1119 22:35:06.671522  283427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem
	I1119 22:35:06.671583  283427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 22:35:06.671677  283427 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-409987 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-409987 localhost minikube]
	I1119 22:35:07.048442  283427 provision.go:177] copyRemoteCerts
	I1119 22:35:07.048516  283427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:35:07.048566  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:07.068954  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:07.164747  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:35:07.183110  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:35:07.200380  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:35:07.216510  283427 provision.go:87] duration metric: took 564.048515ms to configureAuth
	I1119 22:35:07.216533  283427 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:35:07.216743  283427 config.go:182] Loaded profile config "default-k8s-diff-port-409987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:35:07.216881  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:07.237072  283427 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:07.237343  283427 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 22:35:07.237376  283427 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:35:07.581091  283427 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:35:07.581117  283427 machine.go:97] duration metric: took 4.370062006s to provisionDockerMachine
	I1119 22:35:07.581132  283427 start.go:293] postStartSetup for "default-k8s-diff-port-409987" (driver="docker")
	I1119 22:35:07.581145  283427 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:35:07.581210  283427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:35:07.581280  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:07.605526  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:07.702754  283427 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:35:07.706744  283427 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:35:07.706775  283427 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:35:07.706787  283427 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 22:35:07.706851  283427 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 22:35:07.706971  283427 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem -> 128292.pem in /etc/ssl/certs
	I1119 22:35:07.707100  283427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:35:07.715718  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:35:07.735067  283427 start.go:296] duration metric: took 153.922814ms for postStartSetup
	I1119 22:35:07.735132  283427 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:35:07.735187  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:07.757489  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:07.852981  283427 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:35:07.858397  283427 fix.go:56] duration metric: took 4.990700855s for fixHost
	I1119 22:35:07.858425  283427 start.go:83] releasing machines lock for "default-k8s-diff-port-409987", held for 4.990749599s
	I1119 22:35:07.858501  283427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:35:07.878595  283427 ssh_runner.go:195] Run: cat /version.json
	I1119 22:35:07.878617  283427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:35:07.878646  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:07.878760  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:07.901080  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:07.902243  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:08.004508  283427 ssh_runner.go:195] Run: systemctl --version
	I1119 22:35:08.072393  283427 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:35:08.112580  283427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:35:08.117380  283427 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:35:08.117443  283427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:35:08.125280  283427 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:35:08.125306  283427 start.go:496] detecting cgroup driver to use...
	I1119 22:35:08.125340  283427 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:35:08.125395  283427 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:35:08.141186  283427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:35:08.153800  283427 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:35:08.153883  283427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:35:08.169952  283427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:35:08.182516  283427 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:35:08.279465  283427 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:35:08.408807  283427 docker.go:234] disabling docker service ...
	I1119 22:35:08.408910  283427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:35:08.425007  283427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:35:08.454009  283427 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:35:08.568046  283427 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:35:08.673486  283427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:35:08.685913  283427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:35:08.700097  283427 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:35:08.700156  283427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.708669  283427 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 22:35:08.708719  283427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.716978  283427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.725961  283427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.734650  283427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:35:08.742741  283427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.751505  283427 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.760025  283427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.769156  283427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:35:08.776914  283427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:35:08.784065  283427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:35:08.873838  283427 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1119 22:35:09.017167  283427 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:35:09.017233  283427 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:35:09.021023  283427 start.go:564] Will wait 60s for crictl version
	I1119 22:35:09.021085  283427 ssh_runner.go:195] Run: which crictl
	I1119 22:35:09.024396  283427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:35:09.047193  283427 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:35:09.047261  283427 ssh_runner.go:195] Run: crio --version
	I1119 22:35:09.073507  283427 ssh_runner.go:195] Run: crio --version
	I1119 22:35:09.102995  283427 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:35:09.104023  283427 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:35:09.121084  283427 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 22:35:09.124923  283427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:35:09.134795  283427 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:35:09.134942  283427 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:35:09.134989  283427 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:35:09.167642  283427 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:35:09.167663  283427 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:35:09.167713  283427 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:35:09.192028  283427 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:35:09.192044  283427 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:35:09.192050  283427 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1119 22:35:09.192161  283427 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-409987 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
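	(Editor's note: the [Unit]/[Service] fragment above is the kubelet drop-in that minikube later copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, per the scp line further below. A minimal sketch for confirming the drop-in is in effect on the node, assuming shell access to it; these verification commands are not part of the test run:
	    # show the kubelet unit together with its drop-ins, then the effective ExecStart
	    systemctl cat kubelet
	    systemctl show kubelet -p ExecStart --no-pager
	)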
	I1119 22:35:09.192235  283427 ssh_runner.go:195] Run: crio config
	I1119 22:35:09.237020  283427 cni.go:84] Creating CNI manager for ""
	I1119 22:35:09.237041  283427 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:35:09.237058  283427 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:35:09.237088  283427 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-409987 NodeName:default-k8s-diff-port-409987 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:35:09.237216  283427 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-409987"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1119 22:35:09.237274  283427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:35:09.245438  283427 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:35:09.245506  283427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:35:09.252703  283427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 22:35:09.264895  283427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:35:09.276559  283427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1119 22:35:09.288249  283427 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:35:09.291532  283427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:35:09.300500  283427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:35:09.378771  283427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:35:09.402989  283427 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987 for IP: 192.168.76.2
	I1119 22:35:09.403009  283427 certs.go:195] generating shared ca certs ...
	I1119 22:35:09.403028  283427 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:09.403197  283427 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 22:35:09.403267  283427 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 22:35:09.403282  283427 certs.go:257] generating profile certs ...
	I1119 22:35:09.403379  283427 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.key
	I1119 22:35:09.403448  283427 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832
	I1119 22:35:09.403502  283427 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key
	I1119 22:35:09.403652  283427 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem (1338 bytes)
	W1119 22:35:09.403688  283427 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829_empty.pem, impossibly tiny 0 bytes
	I1119 22:35:09.403700  283427 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:35:09.403740  283427 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:35:09.403772  283427 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:35:09.403801  283427 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 22:35:09.403884  283427 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:35:09.404687  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:35:09.422505  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:35:09.442010  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:35:09.462200  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:35:09.486060  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:35:09.508255  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:35:09.527647  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:35:09.546996  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:35:09.566132  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /usr/share/ca-certificates/128292.pem (1708 bytes)
	I1119 22:35:09.584842  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:35:09.604633  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem --> /usr/share/ca-certificates/12829.pem (1338 bytes)
	I1119 22:35:09.624152  283427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:35:09.637806  283427 ssh_runner.go:195] Run: openssl version
	I1119 22:35:09.644717  283427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12829.pem && ln -fs /usr/share/ca-certificates/12829.pem /etc/ssl/certs/12829.pem"
	I1119 22:35:09.653855  283427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12829.pem
	I1119 22:35:09.657977  283427 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12829.pem
	I1119 22:35:09.658029  283427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12829.pem
	I1119 22:35:09.702393  283427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12829.pem /etc/ssl/certs/51391683.0"
	I1119 22:35:09.711249  283427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128292.pem && ln -fs /usr/share/ca-certificates/128292.pem /etc/ssl/certs/128292.pem"
	I1119 22:35:09.720256  283427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128292.pem
	I1119 22:35:09.724448  283427 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128292.pem
	I1119 22:35:09.724503  283427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128292.pem
	I1119 22:35:09.769085  283427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128292.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:35:09.778122  283427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:35:09.787181  283427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:35:09.791237  283427 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:35:09.791289  283427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:35:09.838393  283427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:35:09.848148  283427 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:35:09.852408  283427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:35:09.896186  283427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:35:09.950337  283427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:35:09.999338  283427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:35:10.051660  283427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:35:10.101480  283427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1119 22:35:10.136661  283427 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:35:10.136783  283427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:35:10.136868  283427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:35:10.167859  283427 cri.go:89] found id: "b8661f41149ca707f6324ddb0a00c89afc3d7e90a18f14246ef246fcdd15cae8"
	I1119 22:35:10.167884  283427 cri.go:89] found id: "9ea6b371425c23c51f93f2430382d9425eb4c20205a212ba69de8647057e8a75"
	I1119 22:35:10.167897  283427 cri.go:89] found id: "315d176713f54c2fce1e9bd8c79d670c65d2d6d604b46b0d6811484175780e15"
	I1119 22:35:10.167902  283427 cri.go:89] found id: "ad7b6880b1efc4a16d5cf0cecbb8a520d6cbc6b98ff585507d7a21dc7f0b8140"
	I1119 22:35:10.167907  283427 cri.go:89] found id: ""
	I1119 22:35:10.167957  283427 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 22:35:10.179527  283427 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:35:10Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:35:10.179584  283427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:35:10.189385  283427 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:35:10.189403  283427 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:35:10.189444  283427 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:35:10.197533  283427 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:35:10.198693  283427 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-409987" does not appear in /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:35:10.199478  283427 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-9335/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-409987" cluster setting kubeconfig missing "default-k8s-diff-port-409987" context setting]
	I1119 22:35:10.200652  283427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:10.202690  283427 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:35:10.211340  283427 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 22:35:10.211368  283427 kubeadm.go:602] duration metric: took 21.958944ms to restartPrimaryControlPlane
	I1119 22:35:10.211378  283427 kubeadm.go:403] duration metric: took 74.726586ms to StartCluster
	I1119 22:35:10.211393  283427 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:10.211446  283427 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:35:10.213436  283427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:10.213672  283427 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:35:10.213827  283427 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:35:10.213912  283427 config.go:182] Loaded profile config "default-k8s-diff-port-409987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:35:10.213917  283427 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-409987"
	I1119 22:35:10.213936  283427 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-409987"
	W1119 22:35:10.213944  283427 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:35:10.213954  283427 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-409987"
	I1119 22:35:10.213971  283427 host.go:66] Checking if "default-k8s-diff-port-409987" exists ...
	I1119 22:35:10.213977  283427 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-409987"
	W1119 22:35:10.213987  283427 addons.go:248] addon dashboard should already be in state true
	I1119 22:35:10.213987  283427 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-409987"
	I1119 22:35:10.214013  283427 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-409987"
	I1119 22:35:10.214019  283427 host.go:66] Checking if "default-k8s-diff-port-409987" exists ...
	I1119 22:35:10.214335  283427 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:10.214512  283427 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:10.214523  283427 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:10.216567  283427 out.go:179] * Verifying Kubernetes components...
	I1119 22:35:10.217788  283427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:35:10.243233  283427 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 22:35:10.243625  283427 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-409987"
	W1119 22:35:10.243645  283427 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:35:10.243673  283427 host.go:66] Checking if "default-k8s-diff-port-409987" exists ...
	I1119 22:35:10.244864  283427 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:10.245784  283427 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:35:10.245784  283427 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 22:35:08.371914  280396 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.691775378s
	I1119 22:35:08.502170  280396 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.82169761s
	I1119 22:35:10.181636  280396 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501639908s
	I1119 22:35:10.199138  280396 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:35:10.213494  280396 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:35:10.230132  280396 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:35:10.230382  280396 kubeadm.go:319] [mark-control-plane] Marking the node auto-654834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:35:10.246216  280396 kubeadm.go:319] [bootstrap-token] Using token: 32thjv.xvq2u04pt4z9x5mh
	I1119 22:35:08.327967  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:35:08.328355  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:35:08.328408  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:35:08.328464  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:35:08.373665  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:35:08.373686  229026 cri.go:89] found id: ""
	I1119 22:35:08.373696  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:35:08.373755  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:08.377953  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:35:08.378016  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:35:08.413453  229026 cri.go:89] found id: ""
	I1119 22:35:08.413479  229026 logs.go:282] 0 containers: []
	W1119 22:35:08.413488  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:35:08.413496  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:35:08.413552  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:35:08.447176  229026 cri.go:89] found id: ""
	I1119 22:35:08.447201  229026 logs.go:282] 0 containers: []
	W1119 22:35:08.447211  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:35:08.447219  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:35:08.447277  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:35:08.494002  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:35:08.494026  229026 cri.go:89] found id: ""
	I1119 22:35:08.494037  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:35:08.494094  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:08.499593  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:35:08.500862  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:35:08.528013  229026 cri.go:89] found id: ""
	I1119 22:35:08.528040  229026 logs.go:282] 0 containers: []
	W1119 22:35:08.528050  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:35:08.528058  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:35:08.528107  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:35:08.560761  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:35:08.560783  229026 cri.go:89] found id: ""
	I1119 22:35:08.560792  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:35:08.560860  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:08.566006  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:35:08.566074  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:35:08.598595  229026 cri.go:89] found id: ""
	I1119 22:35:08.598623  229026 logs.go:282] 0 containers: []
	W1119 22:35:08.598634  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:35:08.598641  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:35:08.598699  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:35:08.636300  229026 cri.go:89] found id: ""
	I1119 22:35:08.636330  229026 logs.go:282] 0 containers: []
	W1119 22:35:08.636340  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:35:08.636351  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:35:08.636366  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:35:08.740568  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:35:08.740592  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:35:08.755375  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:35:08.755396  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:35:08.824490  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:35:08.824514  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:35:08.824530  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:35:08.855365  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:35:08.855394  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:35:08.909962  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:35:08.909992  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:35:08.937927  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:35:08.937957  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:35:08.983471  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:35:08.983501  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:35:11.513882  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:35:11.514273  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:35:11.514329  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:35:11.514386  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:35:10.247788  280396 out.go:252]   - Configuring RBAC rules ...
	I1119 22:35:10.248567  280396 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:35:10.255669  280396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:35:10.265244  280396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:35:10.271269  280396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:35:10.276831  280396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:35:10.281350  280396 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:35:10.592053  280396 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:35:11.013078  280396 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:35:11.597065  280396 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:35:11.597089  280396 kubeadm.go:319] 
	I1119 22:35:11.597155  280396 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:35:11.597165  280396 kubeadm.go:319] 
	I1119 22:35:11.597249  280396 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:35:11.597260  280396 kubeadm.go:319] 
	I1119 22:35:11.597298  280396 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:35:11.597372  280396 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:35:11.597460  280396 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:35:11.597480  280396 kubeadm.go:319] 
	I1119 22:35:11.597549  280396 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:35:11.597555  280396 kubeadm.go:319] 
	I1119 22:35:11.597621  280396 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:35:11.597626  280396 kubeadm.go:319] 
	I1119 22:35:11.597689  280396 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:35:11.597793  280396 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:35:11.597912  280396 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:35:11.597920  280396 kubeadm.go:319] 
	I1119 22:35:11.598037  280396 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:35:11.598131  280396 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:35:11.598137  280396 kubeadm.go:319] 
	I1119 22:35:11.598497  280396 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 32thjv.xvq2u04pt4z9x5mh \
	I1119 22:35:11.598628  280396 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b \
	I1119 22:35:11.598662  280396 kubeadm.go:319] 	--control-plane 
	I1119 22:35:11.598671  280396 kubeadm.go:319] 
	I1119 22:35:11.598774  280396 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:35:11.598787  280396 kubeadm.go:319] 
	I1119 22:35:11.598901  280396 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 32thjv.xvq2u04pt4z9x5mh \
	I1119 22:35:11.599036  280396 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b 
	I1119 22:35:11.601677  280396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:35:11.601889  280396 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1119 22:35:11.601923  280396 cni.go:84] Creating CNI manager for ""
	I1119 22:35:11.601933  280396 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:35:11.604158  280396 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:35:11.605244  280396 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:35:11.610160  280396 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:35:11.610180  280396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:35:11.623612  280396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:35:11.899522  280396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:35:11.899624  280396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:35:11.899624  280396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-654834 minikube.k8s.io/updated_at=2025_11_19T22_35_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=auto-654834 minikube.k8s.io/primary=true
	I1119 22:35:11.914941  280396 ops.go:34] apiserver oom_adj: -16
	I1119 22:35:10.246885  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 22:35:10.246902  283427 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 22:35:10.246956  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:10.247136  283427 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:35:10.247144  283427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:35:10.247179  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:10.276288  283427 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:35:10.276314  283427 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:35:10.276373  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:10.281460  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:10.284573  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:10.303341  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:10.375707  283427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:35:10.390614  283427 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-409987" to be "Ready" ...
	I1119 22:35:10.397475  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 22:35:10.397498  283427 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 22:35:10.403603  283427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:35:10.413247  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 22:35:10.413266  283427 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 22:35:10.413762  283427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:35:10.427442  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 22:35:10.427463  283427 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 22:35:10.441721  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 22:35:10.441741  283427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 22:35:10.459808  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 22:35:10.459840  283427 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 22:35:10.484617  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 22:35:10.484644  283427 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 22:35:10.503172  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 22:35:10.503196  283427 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 22:35:10.521720  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 22:35:10.521743  283427 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 22:35:10.536104  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:35:10.536126  283427 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 22:35:10.548516  283427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:35:12.067062  283427 node_ready.go:49] node "default-k8s-diff-port-409987" is "Ready"
	I1119 22:35:12.067102  283427 node_ready.go:38] duration metric: took 1.676455425s for node "default-k8s-diff-port-409987" to be "Ready" ...
	I1119 22:35:12.067119  283427 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:35:12.067173  283427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:35:12.643292  283427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.239659162s)
	I1119 22:35:12.643386  283427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.229599349s)
	I1119 22:35:12.643498  283427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.094952696s)
	I1119 22:35:12.643533  283427 api_server.go:72] duration metric: took 2.429824803s to wait for apiserver process to appear ...
	I1119 22:35:12.643549  283427 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:35:12.643569  283427 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1119 22:35:12.645197  283427 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-409987 addons enable metrics-server
	
	I1119 22:35:12.648855  283427 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:35:12.648879  283427 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:35:12.653061  283427 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 22:35:12.654280  283427 addons.go:515] duration metric: took 2.440472022s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	
	
	==> CRI-O <==
	Nov 19 22:34:28 embed-certs-443380 crio[571]: time="2025-11-19T22:34:28.850794794Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:34:28 embed-certs-443380 crio[571]: time="2025-11-19T22:34:28.85539637Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:34:28 embed-certs-443380 crio[571]: time="2025-11-19T22:34:28.855418911Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.079456769Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ed0ed560-edf4-48ed-adbf-e8a6d99ec60d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.082700754Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6a9f43b7-e17d-45de-90bb-5c53515a37ad name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.086079632Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh/dashboard-metrics-scraper" id=f96aed0a-bcb2-4b13-88eb-597f079faa78 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.086207439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.095634514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.096301751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.122230248Z" level=info msg="Created container dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh/dashboard-metrics-scraper" id=f96aed0a-bcb2-4b13-88eb-597f079faa78 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.123085833Z" level=info msg="Starting container: dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa" id=9f0d531e-9ab5-4290-9a09-31bdff40fa0c name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.12548669Z" level=info msg="Started container" PID=1798 containerID=dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh/dashboard-metrics-scraper id=9f0d531e-9ab5-4290-9a09-31bdff40fa0c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3485c433277f598574cf9e83ef142fb933eacb6888e4aec85f4cb0c66b95fac
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.174293173Z" level=info msg="Removing container: 7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb" id=dfd6357b-daf1-4c3e-8494-64f441ef5aa3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.184439496Z" level=info msg="Removed container 7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh/dashboard-metrics-scraper" id=dfd6357b-daf1-4c3e-8494-64f441ef5aa3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.193602545Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=87106f64-b2e8-4dac-ad72-8fdfad86915d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.194639975Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9b9a5491-5c90-47b0-8439-e5f443a0765e name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.195852728Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=48786fd0-2141-4295-ae76-a52ca2282a8d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.195988642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.200241738Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.200424922Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7dc50a7176012be2d15946b268a281c80aaf7310fce87e1ec79951b85c92b59d/merged/etc/passwd: no such file or directory"
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.200459131Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7dc50a7176012be2d15946b268a281c80aaf7310fce87e1ec79951b85c92b59d/merged/etc/group: no such file or directory"
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.200756735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.227857576Z" level=info msg="Created container e32255d6628287964aaa1c81ffd5cb354d7246207c8bcc2ec09d6d648898b1bb: kube-system/storage-provisioner/storage-provisioner" id=48786fd0-2141-4295-ae76-a52ca2282a8d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.228457731Z" level=info msg="Starting container: e32255d6628287964aaa1c81ffd5cb354d7246207c8bcc2ec09d6d648898b1bb" id=66d1f0a3-eabe-4eff-82a1-8d1954f36dd9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.230196923Z" level=info msg="Started container" PID=1812 containerID=e32255d6628287964aaa1c81ffd5cb354d7246207c8bcc2ec09d6d648898b1bb description=kube-system/storage-provisioner/storage-provisioner id=66d1f0a3-eabe-4eff-82a1-8d1954f36dd9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=225f8c19962ed3e6a8eb03a789b66fd4d1fc4e0dbd7bc90cb12e3efce6587d44
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e32255d662828       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   225f8c19962ed       storage-provisioner                          kube-system
	dfcb372f5b750       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   d3485c433277f       dashboard-metrics-scraper-6ffb444bf9-gthdh   kubernetes-dashboard
	f8a07463feed4       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   0180970baa651       kubernetes-dashboard-855c9754f9-mmf4r        kubernetes-dashboard
	ccf645b5345f1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   75703fbf5cd40       busybox                                      default
	df257a051a08e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   e485fe7331ca5       coredns-66bc5c9577-jmjmf                     kube-system
	1a691b92d4e51       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   b062bcbd17488       kindnet-gq4x5                                kube-system
	3034e4e70b518       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   225f8c19962ed       storage-provisioner                          kube-system
	4ca72c190c2ac       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   67d91ea5cf499       kube-proxy-r5xtg                             kube-system
	847f5d7dba3ab       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   d133a4446c614       etcd-embed-certs-443380                      kube-system
	f2e2adcbdf2ed       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   e80546c2d0bed       kube-apiserver-embed-certs-443380            kube-system
	185e753f982bb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   f0c8744c432b8       kube-controller-manager-embed-certs-443380   kube-system
	de4131eab48f0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   6f6e45c082223       kube-scheduler-embed-certs-443380            kube-system
	
	
	==> coredns [df257a051a08e4a48e737e015b9042f67752f43236a8a79e391dd2ec99c2c20c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42920 - 51769 "HINFO IN 5516353465025441952.3947523456498319085. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060988613s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-443380
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-443380
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=embed-certs-443380
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_33_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:33:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-443380
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:35:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:35:08 +0000   Wed, 19 Nov 2025 22:33:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:35:08 +0000   Wed, 19 Nov 2025 22:33:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:35:08 +0000   Wed, 19 Nov 2025 22:33:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:35:08 +0000   Wed, 19 Nov 2025 22:33:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-443380
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                e1eb2e2e-5c81-4978-ae2f-b498e52a3d43
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-jmjmf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-443380                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-gq4x5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-443380             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-443380    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-r5xtg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-443380             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gthdh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mmf4r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 55s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node embed-certs-443380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node embed-certs-443380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x8 over 119s)  kubelet          Node embed-certs-443380 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     115s                 kubelet          Node embed-certs-443380 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  115s                 kubelet          Node embed-certs-443380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s                 kubelet          Node embed-certs-443380 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s                 node-controller  Node embed-certs-443380 event: Registered Node embed-certs-443380 in Controller
	  Normal  NodeReady                98s                  kubelet          Node embed-certs-443380 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node embed-certs-443380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node embed-certs-443380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node embed-certs-443380 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                  node-controller  Node embed-certs-443380 event: Registered Node embed-certs-443380 in Controller
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [847f5d7dba3ab17916fecc3496f64e3c432a7aea38029dc58d6ca5c607f49bf4] <==
	{"level":"warn","ts":"2025-11-19T22:34:16.520146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.531123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.537828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.544708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.550360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.555960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.561881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.567422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.573270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.590914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.596579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.602792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.608725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.614654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.620142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.626276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.634040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.641354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.648159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.655051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.661046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.685892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.693300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.702471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.761455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45114","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:35:14 up  1:17,  0 user,  load average: 2.14, 2.61, 1.88
	Linux embed-certs-443380 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a691b92d4e519b79315b18ca34f25853a37f7381a8be39393abe3dd2e5fc138] <==
	I1119 22:34:18.628570       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:34:18.628823       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:34:18.628998       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:34:18.629018       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:34:18.629042       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:34:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:34:18.829707       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:34:18.829928       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:34:18.829972       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:34:18.830139       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:34:19.323114       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:34:19.323145       1 metrics.go:72] Registering metrics
	I1119 22:34:19.323239       1 controller.go:711] "Syncing nftables rules"
	I1119 22:34:28.829758       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:34:28.829857       1 main.go:301] handling current node
	I1119 22:34:38.832432       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:34:38.832461       1 main.go:301] handling current node
	I1119 22:34:48.829913       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:34:48.829964       1 main.go:301] handling current node
	I1119 22:34:58.830399       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:34:58.830442       1 main.go:301] handling current node
	I1119 22:35:08.829462       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:35:08.829510       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f2e2adcbdf2ed28a414676c53047f68a57fcf6fb525c42cea338059bedb6224c] <==
	I1119 22:34:17.274606       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 22:34:17.274647       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:34:17.275435       1 aggregator.go:171] initial CRD sync complete...
	I1119 22:34:17.275490       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:34:17.275517       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:34:17.275540       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:34:17.275775       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 22:34:17.275795       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 22:34:17.277246       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1119 22:34:17.284507       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 22:34:17.295077       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:34:17.298072       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 22:34:17.305408       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:34:17.699046       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:34:17.731602       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:34:17.750497       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:34:17.757734       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:34:17.763554       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:34:17.797563       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.0.96"}
	I1119 22:34:17.806189       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.149.14"}
	I1119 22:34:18.176531       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:34:20.653256       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:34:21.102847       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:34:21.102848       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:34:21.202999       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [185e753f982bb76405831c8b358ebdfd082e42f64259200ff2771e2287ccd2a7] <==
	I1119 22:34:20.601218       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 22:34:20.601326       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:34:20.601437       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-443380"
	I1119 22:34:20.601487       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 22:34:20.601540       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:34:20.601611       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 22:34:20.601668       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 22:34:20.602333       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 22:34:20.602358       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 22:34:20.602419       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 22:34:20.602862       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 22:34:20.606784       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:34:20.609040       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:34:20.611364       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:34:20.612574       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:34:20.613767       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:34:20.616914       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:34:20.618023       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:34:20.620318       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 22:34:20.622545       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:34:20.643129       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:34:20.643150       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:34:20.643157       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:34:20.648909       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:34:20.651063       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [4ca72c190c2ac15c8d89f95f3c61a04b635604352eb3e300c4f8e35cb5f03acd] <==
	I1119 22:34:18.492875       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:34:18.552084       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:34:18.652194       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:34:18.652240       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:34:18.652337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:34:18.671003       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:34:18.671068       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:34:18.676651       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:34:18.677080       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:34:18.677122       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:34:18.678521       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:34:18.678547       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:34:18.678598       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:34:18.678610       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:34:18.678643       1 config.go:309] "Starting node config controller"
	I1119 22:34:18.678654       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:34:18.678661       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:34:18.678706       1 config.go:200] "Starting service config controller"
	I1119 22:34:18.678747       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:34:18.778795       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:34:18.778841       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:34:18.778862       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [de4131eab48f0dd8d34f317e598532f0311ff6539bf32deb7148043cda0db569] <==
	I1119 22:34:17.250646       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:34:17.253756       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:34:17.253946       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:34:17.253965       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:34:17.254213       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 22:34:17.267015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:34:17.267125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:34:17.267154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:34:17.267174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:34:17.267191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:34:17.267238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:34:17.267306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:34:17.267317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:34:17.267404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:34:17.267490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:34:17.267537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:34:17.267611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:34:17.267675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:34:17.267740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:34:17.267791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:34:17.267869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:34:17.268845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:34:17.270424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1119 22:34:17.354735       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:34:21 embed-certs-443380 kubelet[739]: I1119 22:34:21.371365     739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2w8\" (UniqueName: \"kubernetes.io/projected/5d678ef9-cff7-48f6-b954-b87ef278aff0-kube-api-access-5s2w8\") pod \"kubernetes-dashboard-855c9754f9-mmf4r\" (UID: \"5d678ef9-cff7-48f6-b954-b87ef278aff0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mmf4r"
	Nov 19 22:34:21 embed-certs-443380 kubelet[739]: I1119 22:34:21.371438     739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5d678ef9-cff7-48f6-b954-b87ef278aff0-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mmf4r\" (UID: \"5d678ef9-cff7-48f6-b954-b87ef278aff0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mmf4r"
	Nov 19 22:34:24 embed-certs-443380 kubelet[739]: I1119 22:34:24.120528     739 scope.go:117] "RemoveContainer" containerID="03108b518244fe9d713fcb68eb3164f23b52b2044dc89682adeadf101970b0c0"
	Nov 19 22:34:25 embed-certs-443380 kubelet[739]: I1119 22:34:25.125149     739 scope.go:117] "RemoveContainer" containerID="03108b518244fe9d713fcb68eb3164f23b52b2044dc89682adeadf101970b0c0"
	Nov 19 22:34:25 embed-certs-443380 kubelet[739]: I1119 22:34:25.125303     739 scope.go:117] "RemoveContainer" containerID="7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb"
	Nov 19 22:34:25 embed-certs-443380 kubelet[739]: E1119 22:34:25.125519     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gthdh_kubernetes-dashboard(bfdd4f0c-3777-4106-a45b-97fae9e0d71f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh" podUID="bfdd4f0c-3777-4106-a45b-97fae9e0d71f"
	Nov 19 22:34:26 embed-certs-443380 kubelet[739]: I1119 22:34:26.129499     739 scope.go:117] "RemoveContainer" containerID="7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb"
	Nov 19 22:34:26 embed-certs-443380 kubelet[739]: E1119 22:34:26.129722     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gthdh_kubernetes-dashboard(bfdd4f0c-3777-4106-a45b-97fae9e0d71f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh" podUID="bfdd4f0c-3777-4106-a45b-97fae9e0d71f"
	Nov 19 22:34:27 embed-certs-443380 kubelet[739]: I1119 22:34:27.594003     739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 22:34:28 embed-certs-443380 kubelet[739]: I1119 22:34:28.393935     739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mmf4r" podStartSLOduration=1.921192616 podStartE2EDuration="7.393912951s" podCreationTimestamp="2025-11-19 22:34:21 +0000 UTC" firstStartedPulling="2025-11-19 22:34:21.809069805 +0000 UTC m=+6.821739835" lastFinishedPulling="2025-11-19 22:34:27.281790135 +0000 UTC m=+12.294460170" observedRunningTime="2025-11-19 22:34:28.144686323 +0000 UTC m=+13.157356361" watchObservedRunningTime="2025-11-19 22:34:28.393912951 +0000 UTC m=+13.406582990"
	Nov 19 22:34:31 embed-certs-443380 kubelet[739]: I1119 22:34:31.744111     739 scope.go:117] "RemoveContainer" containerID="7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb"
	Nov 19 22:34:31 embed-certs-443380 kubelet[739]: E1119 22:34:31.744276     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gthdh_kubernetes-dashboard(bfdd4f0c-3777-4106-a45b-97fae9e0d71f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh" podUID="bfdd4f0c-3777-4106-a45b-97fae9e0d71f"
	Nov 19 22:34:43 embed-certs-443380 kubelet[739]: I1119 22:34:43.078699     739 scope.go:117] "RemoveContainer" containerID="7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb"
	Nov 19 22:34:43 embed-certs-443380 kubelet[739]: I1119 22:34:43.172949     739 scope.go:117] "RemoveContainer" containerID="7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb"
	Nov 19 22:34:43 embed-certs-443380 kubelet[739]: I1119 22:34:43.173186     739 scope.go:117] "RemoveContainer" containerID="dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa"
	Nov 19 22:34:43 embed-certs-443380 kubelet[739]: E1119 22:34:43.173496     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gthdh_kubernetes-dashboard(bfdd4f0c-3777-4106-a45b-97fae9e0d71f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh" podUID="bfdd4f0c-3777-4106-a45b-97fae9e0d71f"
	Nov 19 22:34:49 embed-certs-443380 kubelet[739]: I1119 22:34:49.193212     739 scope.go:117] "RemoveContainer" containerID="3034e4e70b518b88b7e724642f117283a7941f0f13aabe55e1d1c03789730810"
	Nov 19 22:34:51 embed-certs-443380 kubelet[739]: I1119 22:34:51.744240     739 scope.go:117] "RemoveContainer" containerID="dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa"
	Nov 19 22:34:51 embed-certs-443380 kubelet[739]: E1119 22:34:51.744473     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gthdh_kubernetes-dashboard(bfdd4f0c-3777-4106-a45b-97fae9e0d71f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh" podUID="bfdd4f0c-3777-4106-a45b-97fae9e0d71f"
	Nov 19 22:35:03 embed-certs-443380 kubelet[739]: I1119 22:35:03.078599     739 scope.go:117] "RemoveContainer" containerID="dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa"
	Nov 19 22:35:03 embed-certs-443380 kubelet[739]: E1119 22:35:03.078947     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gthdh_kubernetes-dashboard(bfdd4f0c-3777-4106-a45b-97fae9e0d71f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh" podUID="bfdd4f0c-3777-4106-a45b-97fae9e0d71f"
	Nov 19 22:35:11 embed-certs-443380 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:35:11 embed-certs-443380 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:35:11 embed-certs-443380 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 22:35:11 embed-certs-443380 systemd[1]: kubelet.service: Consumed 1.625s CPU time.
	
	
	==> kubernetes-dashboard [f8a07463feed4f6a4d7c8e5e4b1d14a47cab1a7fa1ce43c84aba5ba99da95c3f] <==
	2025/11/19 22:34:27 Using namespace: kubernetes-dashboard
	2025/11/19 22:34:27 Using in-cluster config to connect to apiserver
	2025/11/19 22:34:27 Using secret token for csrf signing
	2025/11/19 22:34:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 22:34:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 22:34:27 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 22:34:27 Generating JWE encryption key
	2025/11/19 22:34:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 22:34:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 22:34:27 Initializing JWE encryption key from synchronized object
	2025/11/19 22:34:27 Creating in-cluster Sidecar client
	2025/11/19 22:34:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:34:27 Serving insecurely on HTTP port: 9090
	2025/11/19 22:34:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:34:27 Starting overwatch
	
	
	==> storage-provisioner [3034e4e70b518b88b7e724642f117283a7941f0f13aabe55e1d1c03789730810] <==
	I1119 22:34:18.450409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 22:34:48.455263       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e32255d6628287964aaa1c81ffd5cb354d7246207c8bcc2ec09d6d648898b1bb] <==
	I1119 22:34:49.243220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:34:49.251545       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:34:49.251644       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:34:49.253570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:52.709332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:56.969427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:00.567214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:03.621322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:06.642993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:06.648161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:35:06.648278       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:35:06.648415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"824b733c-0cb0-473e-abb7-ba15ddd82973", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-443380_d06606dc-0d57-4817-b4a4-6d3c29cf0b5d became leader
	I1119 22:35:06.648447       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-443380_d06606dc-0d57-4817-b4a4-6d3c29cf0b5d!
	W1119 22:35:06.650757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:06.653992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:35:06.748723       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-443380_d06606dc-0d57-4817-b4a4-6d3c29cf0b5d!
	W1119 22:35:08.657748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:08.661790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:10.665407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:10.670567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:12.673914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:12.677624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-443380 -n embed-certs-443380
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-443380 -n embed-certs-443380: exit status 2 (321.288721ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-443380 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-443380
helpers_test.go:243: (dbg) docker inspect embed-certs-443380:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49",
	        "Created": "2025-11-19T22:33:06.74702883Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 269527,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:34:08.144483977Z",
	            "FinishedAt": "2025-11-19T22:34:07.194772047Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49/hostname",
	        "HostsPath": "/var/lib/docker/containers/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49/hosts",
	        "LogPath": "/var/lib/docker/containers/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49/f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49-json.log",
	        "Name": "/embed-certs-443380",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-443380:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-443380",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f1d90b7b5af6eeccd8825e51addcdf1fe0a2a98c943d2244bd87e5eed8285c49",
	                "LowerDir": "/var/lib/docker/overlay2/d3c225c57eb8600f9e2c8c3b511484ac3eca82f93258077472e63e86d3473e47-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3c225c57eb8600f9e2c8c3b511484ac3eca82f93258077472e63e86d3473e47/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3c225c57eb8600f9e2c8c3b511484ac3eca82f93258077472e63e86d3473e47/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3c225c57eb8600f9e2c8c3b511484ac3eca82f93258077472e63e86d3473e47/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-443380",
	                "Source": "/var/lib/docker/volumes/embed-certs-443380/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-443380",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-443380",
	                "name.minikube.sigs.k8s.io": "embed-certs-443380",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "45c768282ab68ea0c7a4cde1e8b10df00b9465abdf3bddadfb2aac195203ba32",
	            "SandboxKey": "/var/run/docker/netns/45c768282ab6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-443380": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "79be9ba27c325ef564b730d7c6a14208f6797c8013b71ad28befe3377b076629",
	                    "EndpointID": "98a0e06e8d2b39a8382e2d37248cc7ef5f99817b4e5a6ec715ff230711ef3ea1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "3a:f8:c9:9e:ca:2f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-443380",
	                        "f1d90b7b5af6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
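Note (illustrative, not part of the captured run): the per-port host mappings under NetworkSettings.Ports above can be read individually with the same Go-template form that appears further down in this log; against this container it should print 33091 for the API server port:
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-443380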
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-443380 -n embed-certs-443380
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-443380 -n embed-certs-443380: exit status 2 (313.926141ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-443380 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-443380 logs -n 25: (1.152938899s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ image   │ no-preload-178067 image list --format=json                                                                                                                                                                                                    │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ pause   │ -p no-preload-178067 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-443380 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │                     │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ stop    │ -p embed-certs-443380 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p no-preload-178067                                                                                                                                                                                                                          │ no-preload-178067            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:33 UTC │
	│ start   │ -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:33 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-443380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable metrics-server -p newest-cni-949690 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ stop    │ -p newest-cni-949690 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable dashboard -p newest-cni-949690 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-409987 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ image   │ newest-cni-949690 image list --format=json                                                                                                                                                                                                    │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ pause   │ -p newest-cni-949690 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-409987 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:35 UTC │
	│ delete  │ -p newest-cni-949690                                                                                                                                                                                                                          │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ delete  │ -p newest-cni-949690                                                                                                                                                                                                                          │ newest-cni-949690            │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │ 19 Nov 25 22:34 UTC │
	│ start   │ -p auto-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:34 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-409987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ start   │ -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ image   │ embed-certs-443380 image list --format=json                                                                                                                                                                                                   │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ pause   │ -p embed-certs-443380 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-443380           │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:35:02
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:35:02.669855  283427 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:35:02.669967  283427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:35:02.669974  283427 out.go:374] Setting ErrFile to fd 2...
	I1119 22:35:02.669980  283427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:35:02.670219  283427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:35:02.670653  283427 out.go:368] Setting JSON to false
	I1119 22:35:02.671776  283427 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4651,"bootTime":1763587052,"procs":294,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:35:02.671889  283427 start.go:143] virtualization: kvm guest
	I1119 22:35:02.676939  283427 out.go:179] * [default-k8s-diff-port-409987] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:35:02.678186  283427 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:35:02.678186  283427 notify.go:221] Checking for updates...
	I1119 22:35:02.679327  283427 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:35:02.680737  283427 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:35:02.682364  283427 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:35:02.683549  283427 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:35:02.686211  283427 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:35:02.687617  283427 config.go:182] Loaded profile config "default-k8s-diff-port-409987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:35:02.688102  283427 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:35:02.715631  283427 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:35:02.715778  283427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:35:02.776833  283427 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:35:02.766780998 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:35:02.777010  283427 docker.go:319] overlay module found
	I1119 22:35:02.778856  283427 out.go:179] * Using the docker driver based on existing profile
	I1119 22:35:02.779960  283427 start.go:309] selected driver: docker
	I1119 22:35:02.779974  283427 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:35:02.780050  283427 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:35:02.780629  283427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:35:02.838687  283427 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:35:02.829074701 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:35:02.839071  283427 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:35:02.839108  283427 cni.go:84] Creating CNI manager for ""
	I1119 22:35:02.839175  283427 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:35:02.839214  283427 start.go:353] cluster config:
	{Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:35:02.844217  283427 out.go:179] * Starting "default-k8s-diff-port-409987" primary control-plane node in "default-k8s-diff-port-409987" cluster
	I1119 22:35:02.845242  283427 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:35:02.846248  283427 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:35:02.847262  283427 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:35:02.847293  283427 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:35:02.847291  283427 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:35:02.847302  283427 cache.go:65] Caching tarball of preloaded images
	I1119 22:35:02.847400  283427 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:35:02.847416  283427 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:35:02.847533  283427 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json ...
	I1119 22:35:02.867513  283427 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:35:02.867540  283427 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:35:02.867558  283427 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:35:02.867590  283427 start.go:360] acquireMachinesLock for default-k8s-diff-port-409987: {Name:mk3691865877e78ad0fe52d2c0e71ee1c1c3699a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:35:02.867662  283427 start.go:364] duration metric: took 42.344µs to acquireMachinesLock for "default-k8s-diff-port-409987"
	I1119 22:35:02.867682  283427 start.go:96] Skipping create...Using existing machine configuration
	I1119 22:35:02.867690  283427 fix.go:54] fixHost starting: 
	I1119 22:35:02.867962  283427 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:02.885640  283427 fix.go:112] recreateIfNeeded on default-k8s-diff-port-409987: state=Stopped err=<nil>
	W1119 22:35:02.885669  283427 fix.go:138] unexpected machine state, will restart: <nil>
	I1119 22:35:02.056042  280396 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1119 22:35:02.169218  280396 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1119 22:35:02.389571  280396 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1119 22:35:02.631126  280396 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1119 22:35:02.901672  280396 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1119 22:35:02.901876  280396 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-654834 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:35:03.648885  280396 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1119 22:35:03.649073  280396 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-654834 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1119 22:35:03.778193  280396 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1119 22:35:04.001487  280396 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1119 22:35:04.251132  280396 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1119 22:35:04.251296  280396 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1119 22:35:04.562644  280396 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1119 22:35:04.803077  280396 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1119 22:35:04.974139  280396 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1119 22:35:05.145046  280396 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1119 22:35:05.539305  280396 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1119 22:35:05.539946  280396 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1119 22:35:05.545900  280396 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1119 22:35:02.116948  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:35:02.117305  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:35:02.117371  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:35:02.117425  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:35:02.143228  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:35:02.143251  229026 cri.go:89] found id: ""
	I1119 22:35:02.143261  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:35:02.143318  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:02.147100  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:35:02.147157  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:35:02.172547  229026 cri.go:89] found id: ""
	I1119 22:35:02.172569  229026 logs.go:282] 0 containers: []
	W1119 22:35:02.172579  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:35:02.172585  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:35:02.172639  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:35:02.197989  229026 cri.go:89] found id: ""
	I1119 22:35:02.198013  229026 logs.go:282] 0 containers: []
	W1119 22:35:02.198023  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:35:02.198031  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:35:02.198082  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:35:02.222637  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:35:02.222659  229026 cri.go:89] found id: ""
	I1119 22:35:02.222668  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:35:02.222721  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:02.226658  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:35:02.226714  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:35:02.253145  229026 cri.go:89] found id: ""
	I1119 22:35:02.253165  229026 logs.go:282] 0 containers: []
	W1119 22:35:02.253174  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:35:02.253182  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:35:02.253223  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:35:02.279155  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:35:02.279174  229026 cri.go:89] found id: ""
	I1119 22:35:02.279184  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:35:02.279236  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:02.282884  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:35:02.282940  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:35:02.310700  229026 cri.go:89] found id: ""
	I1119 22:35:02.310719  229026 logs.go:282] 0 containers: []
	W1119 22:35:02.310724  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:35:02.310734  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:35:02.310786  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:35:02.341517  229026 cri.go:89] found id: ""
	I1119 22:35:02.341542  229026 logs.go:282] 0 containers: []
	W1119 22:35:02.341552  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:35:02.341571  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:35:02.341588  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:35:02.402869  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:35:02.402898  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:35:02.430031  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:35:02.430063  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:35:02.486039  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:35:02.486063  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:35:02.519788  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:35:02.519865  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:35:02.623338  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:35:02.623372  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:35:02.638723  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:35:02.638751  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:35:02.708032  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:35:02.708051  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:35:02.708063  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:35:05.248147  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:35:05.248559  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:35:05.248608  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:35:05.248652  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:35:05.277299  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:35:05.277318  229026 cri.go:89] found id: ""
	I1119 22:35:05.277327  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:35:05.277383  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:05.281163  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:35:05.281241  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:35:05.306466  229026 cri.go:89] found id: ""
	I1119 22:35:05.306488  229026 logs.go:282] 0 containers: []
	W1119 22:35:05.306497  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:35:05.306503  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:35:05.306552  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:35:05.331313  229026 cri.go:89] found id: ""
	I1119 22:35:05.331336  229026 logs.go:282] 0 containers: []
	W1119 22:35:05.331345  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:35:05.331353  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:35:05.331407  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:35:05.356450  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:35:05.356469  229026 cri.go:89] found id: ""
	I1119 22:35:05.356477  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:35:05.356528  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:05.360095  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:35:05.360150  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:35:05.383638  229026 cri.go:89] found id: ""
	I1119 22:35:05.383656  229026 logs.go:282] 0 containers: []
	W1119 22:35:05.383664  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:35:05.383669  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:35:05.383719  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:35:05.408364  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:35:05.408382  229026 cri.go:89] found id: ""
	I1119 22:35:05.408389  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:35:05.408432  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:05.412032  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:35:05.412093  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:35:05.435185  229026 cri.go:89] found id: ""
	I1119 22:35:05.435203  229026 logs.go:282] 0 containers: []
	W1119 22:35:05.435209  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:35:05.435214  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:35:05.435249  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:35:05.461840  229026 cri.go:89] found id: ""
	I1119 22:35:05.461864  229026 logs.go:282] 0 containers: []
	W1119 22:35:05.461873  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:35:05.461883  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:35:05.461892  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:35:05.475030  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:35:05.475053  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:35:05.529327  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:35:05.529347  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:35:05.529360  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:35:05.565174  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:35:05.565203  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:35:05.625825  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:35:05.625861  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:35:05.651245  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:35:05.651270  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:35:05.697100  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:35:05.697127  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:35:05.726989  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:35:05.727055  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:35:05.547528  280396 out.go:252]   - Booting up control plane ...
	I1119 22:35:05.547649  280396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1119 22:35:05.547754  280396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1119 22:35:05.548581  280396 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1119 22:35:05.567203  280396 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1119 22:35:05.567319  280396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1119 22:35:05.573939  280396 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1119 22:35:05.574252  280396 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1119 22:35:05.574315  280396 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1119 22:35:05.674071  280396 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1119 22:35:05.674268  280396 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1119 22:35:06.675790  280396 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001823895s
	I1119 22:35:06.679962  280396 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1119 22:35:06.680055  280396 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1119 22:35:06.680198  280396 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1119 22:35:06.680344  280396 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1119 22:35:02.887278  283427 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-409987" ...
	I1119 22:35:02.887341  283427 cli_runner.go:164] Run: docker start default-k8s-diff-port-409987
	I1119 22:35:03.172145  283427 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:03.192491  283427 kic.go:430] container "default-k8s-diff-port-409987" state is running.
	I1119 22:35:03.192946  283427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:35:03.210858  283427 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/config.json ...
	I1119 22:35:03.211040  283427 machine.go:94] provisionDockerMachine start ...
	I1119 22:35:03.211092  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:03.231696  283427 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:03.231979  283427 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 22:35:03.231996  283427 main.go:143] libmachine: About to run SSH command:
	hostname
	I1119 22:35:03.232672  283427 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60904->127.0.0.1:33103: read: connection reset by peer
	I1119 22:35:06.356131  283427 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409987
	
	I1119 22:35:06.356159  283427 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-409987"
	I1119 22:35:06.356224  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:06.373966  283427 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:06.374243  283427 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 22:35:06.374264  283427 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-409987 && echo "default-k8s-diff-port-409987" | sudo tee /etc/hostname
	I1119 22:35:06.507995  283427 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-409987
	
	I1119 22:35:06.508062  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:06.524980  283427 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:06.525193  283427 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 22:35:06.525211  283427 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-409987' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-409987/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-409987' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1119 22:35:06.652378  283427 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1119 22:35:06.652409  283427 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21918-9335/.minikube CaCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21918-9335/.minikube}
	I1119 22:35:06.652430  283427 ubuntu.go:190] setting up certificates
	I1119 22:35:06.652450  283427 provision.go:84] configureAuth start
	I1119 22:35:06.652514  283427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:35:06.671066  283427 provision.go:143] copyHostCerts
	I1119 22:35:06.671131  283427 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem, removing ...
	I1119 22:35:06.671145  283427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem
	I1119 22:35:06.671219  283427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/ca.pem (1082 bytes)
	I1119 22:35:06.671356  283427 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem, removing ...
	I1119 22:35:06.671371  283427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem
	I1119 22:35:06.671412  283427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/cert.pem (1123 bytes)
	I1119 22:35:06.671511  283427 exec_runner.go:144] found /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem, removing ...
	I1119 22:35:06.671522  283427 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem
	I1119 22:35:06.671583  283427 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21918-9335/.minikube/key.pem (1675 bytes)
	I1119 22:35:06.671677  283427 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-409987 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-409987 localhost minikube]
	I1119 22:35:07.048442  283427 provision.go:177] copyRemoteCerts
	I1119 22:35:07.048516  283427 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1119 22:35:07.048566  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:07.068954  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:07.164747  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1119 22:35:07.183110  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1119 22:35:07.200380  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1119 22:35:07.216510  283427 provision.go:87] duration metric: took 564.048515ms to configureAuth
	I1119 22:35:07.216533  283427 ubuntu.go:206] setting minikube options for container-runtime
	I1119 22:35:07.216743  283427 config.go:182] Loaded profile config "default-k8s-diff-port-409987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:35:07.216881  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:07.237072  283427 main.go:143] libmachine: Using SSH client type: native
	I1119 22:35:07.237343  283427 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1119 22:35:07.237376  283427 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1119 22:35:07.581091  283427 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1119 22:35:07.581117  283427 machine.go:97] duration metric: took 4.370062006s to provisionDockerMachine
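provisionDockerMachine resolves the container's published SSH port with the docker template shown in the cli_runner lines above, then drives every provisioning command over SSH at 127.0.0.1:<port>. A sketch of that port lookup using os/exec (container name taken from this run; this is not minikube's own helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostSSHPort returns the host port mapped to the container's 22/tcp,
    // using the same docker inspect template seen in the log above.
    func hostSSHPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostSSHPort("default-k8s-diff-port-409987")
    	fmt.Println(port, err)
    }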
	I1119 22:35:07.581132  283427 start.go:293] postStartSetup for "default-k8s-diff-port-409987" (driver="docker")
	I1119 22:35:07.581145  283427 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1119 22:35:07.581210  283427 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1119 22:35:07.581280  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:07.605526  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:07.702754  283427 ssh_runner.go:195] Run: cat /etc/os-release
	I1119 22:35:07.706744  283427 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1119 22:35:07.706775  283427 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1119 22:35:07.706787  283427 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/addons for local assets ...
	I1119 22:35:07.706851  283427 filesync.go:126] Scanning /home/jenkins/minikube-integration/21918-9335/.minikube/files for local assets ...
	I1119 22:35:07.706971  283427 filesync.go:149] local asset: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem -> 128292.pem in /etc/ssl/certs
	I1119 22:35:07.707100  283427 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1119 22:35:07.715718  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:35:07.735067  283427 start.go:296] duration metric: took 153.922814ms for postStartSetup
	I1119 22:35:07.735132  283427 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:35:07.735187  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:07.757489  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:07.852981  283427 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1119 22:35:07.858397  283427 fix.go:56] duration metric: took 4.990700855s for fixHost
	I1119 22:35:07.858425  283427 start.go:83] releasing machines lock for "default-k8s-diff-port-409987", held for 4.990749599s
	I1119 22:35:07.858501  283427 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-409987
	I1119 22:35:07.878595  283427 ssh_runner.go:195] Run: cat /version.json
	I1119 22:35:07.878617  283427 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1119 22:35:07.878646  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:07.878760  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:07.901080  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:07.902243  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:08.004508  283427 ssh_runner.go:195] Run: systemctl --version
	I1119 22:35:08.072393  283427 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1119 22:35:08.112580  283427 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1119 22:35:08.117380  283427 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1119 22:35:08.117443  283427 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1119 22:35:08.125280  283427 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1119 22:35:08.125306  283427 start.go:496] detecting cgroup driver to use...
	I1119 22:35:08.125340  283427 detect.go:190] detected "systemd" cgroup driver on host os
	I1119 22:35:08.125395  283427 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1119 22:35:08.141186  283427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1119 22:35:08.153800  283427 docker.go:218] disabling cri-docker service (if available) ...
	I1119 22:35:08.153883  283427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1119 22:35:08.169952  283427 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1119 22:35:08.182516  283427 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1119 22:35:08.279465  283427 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1119 22:35:08.408807  283427 docker.go:234] disabling docker service ...
	I1119 22:35:08.408910  283427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1119 22:35:08.425007  283427 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1119 22:35:08.454009  283427 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1119 22:35:08.568046  283427 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1119 22:35:08.673486  283427 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1119 22:35:08.685913  283427 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1119 22:35:08.700097  283427 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1119 22:35:08.700156  283427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.708669  283427 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1119 22:35:08.708719  283427 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.716978  283427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.725961  283427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.734650  283427 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1119 22:35:08.742741  283427 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.751505  283427 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.760025  283427 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1119 22:35:08.769156  283427 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1119 22:35:08.776914  283427 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1119 22:35:08.784065  283427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:35:08.873838  283427 ssh_runner.go:195] Run: sudo systemctl restart crio
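The sed invocations above pin the pause image and force the systemd cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. The same rewrite expressed as a small Go program (an illustrative sketch of the edit, not what minikube ships):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // Roughly what the two sed commands above do to the CRI-O drop-in:
    // pin the pause image and switch the cgroup manager to systemd.
    func main() {
    	const path = "/etc/crio/crio.conf.d/02-crio.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }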
	I1119 22:35:09.017167  283427 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1119 22:35:09.017233  283427 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1119 22:35:09.021023  283427 start.go:564] Will wait 60s for crictl version
	I1119 22:35:09.021085  283427 ssh_runner.go:195] Run: which crictl
	I1119 22:35:09.024396  283427 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1119 22:35:09.047193  283427 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1119 22:35:09.047261  283427 ssh_runner.go:195] Run: crio --version
	I1119 22:35:09.073507  283427 ssh_runner.go:195] Run: crio --version
	I1119 22:35:09.102995  283427 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1119 22:35:09.104023  283427 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-409987 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1119 22:35:09.121084  283427 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1119 22:35:09.124923  283427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:35:09.134795  283427 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1119 22:35:09.134942  283427 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:35:09.134989  283427 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:35:09.167642  283427 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:35:09.167663  283427 crio.go:433] Images already preloaded, skipping extraction
	I1119 22:35:09.167713  283427 ssh_runner.go:195] Run: sudo crictl images --output json
	I1119 22:35:09.192028  283427 crio.go:514] all images are preloaded for cri-o runtime.
	I1119 22:35:09.192044  283427 cache_images.go:86] Images are preloaded, skipping loading
	I1119 22:35:09.192050  283427 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1119 22:35:09.192161  283427 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-409987 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1119 22:35:09.192235  283427 ssh_runner.go:195] Run: crio config
	I1119 22:35:09.237020  283427 cni.go:84] Creating CNI manager for ""
	I1119 22:35:09.237041  283427 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:35:09.237058  283427 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1119 22:35:09.237088  283427 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-409987 NodeName:default-k8s-diff-port-409987 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1119 22:35:09.237216  283427 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-409987"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
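The kubeadm.yaml above is rendered from the option struct logged at kubeadm.go:190 and copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A stripped-down text/template sketch of that rendering, using only a handful of the fields (the template and struct here are illustrative, not minikube's real ones):

    package main

    import (
    	"os"
    	"text/template"
    )

    // A reduced rendering of the InitConfiguration header shown above; the
    // real template carries many more sections and fields.
    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix://{{.CRISocket}}
      name: "{{.NodeName}}"
    `

    type opts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	CRISocket        string
    	NodeName         string
    }

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	// Values copied from the kubeadm options struct in the log above.
    	t.Execute(os.Stdout, opts{
    		AdvertiseAddress: "192.168.76.2",
    		APIServerPort:    8444,
    		CRISocket:        "/var/run/crio/crio.sock",
    		NodeName:         "default-k8s-diff-port-409987",
    	})
    }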
	I1119 22:35:09.237274  283427 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1119 22:35:09.245438  283427 binaries.go:51] Found k8s binaries, skipping transfer
	I1119 22:35:09.245506  283427 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1119 22:35:09.252703  283427 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1119 22:35:09.264895  283427 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1119 22:35:09.276559  283427 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1119 22:35:09.288249  283427 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1119 22:35:09.291532  283427 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1119 22:35:09.300500  283427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:35:09.378771  283427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:35:09.402989  283427 certs.go:69] Setting up /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987 for IP: 192.168.76.2
	I1119 22:35:09.403009  283427 certs.go:195] generating shared ca certs ...
	I1119 22:35:09.403028  283427 certs.go:227] acquiring lock for ca certs: {Name:mkd0cae69c6c4f2aa79e7a054ff93d3e97482f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:09.403197  283427 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key
	I1119 22:35:09.403267  283427 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key
	I1119 22:35:09.403282  283427 certs.go:257] generating profile certs ...
	I1119 22:35:09.403379  283427 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/client.key
	I1119 22:35:09.403448  283427 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key.e1aaa832
	I1119 22:35:09.403502  283427 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key
	I1119 22:35:09.403652  283427 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem (1338 bytes)
	W1119 22:35:09.403688  283427 certs.go:480] ignoring /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829_empty.pem, impossibly tiny 0 bytes
	I1119 22:35:09.403700  283427 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca-key.pem (1679 bytes)
	I1119 22:35:09.403740  283427 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/ca.pem (1082 bytes)
	I1119 22:35:09.403772  283427 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/cert.pem (1123 bytes)
	I1119 22:35:09.403801  283427 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/certs/key.pem (1675 bytes)
	I1119 22:35:09.403884  283427 certs.go:484] found cert: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem (1708 bytes)
	I1119 22:35:09.404687  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1119 22:35:09.422505  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1119 22:35:09.442010  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1119 22:35:09.462200  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1119 22:35:09.486060  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1119 22:35:09.508255  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1119 22:35:09.527647  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1119 22:35:09.546996  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/default-k8s-diff-port-409987/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1119 22:35:09.566132  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/ssl/certs/128292.pem --> /usr/share/ca-certificates/128292.pem (1708 bytes)
	I1119 22:35:09.584842  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1119 22:35:09.604633  283427 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21918-9335/.minikube/certs/12829.pem --> /usr/share/ca-certificates/12829.pem (1338 bytes)
	I1119 22:35:09.624152  283427 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1119 22:35:09.637806  283427 ssh_runner.go:195] Run: openssl version
	I1119 22:35:09.644717  283427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12829.pem && ln -fs /usr/share/ca-certificates/12829.pem /etc/ssl/certs/12829.pem"
	I1119 22:35:09.653855  283427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12829.pem
	I1119 22:35:09.657977  283427 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 19 21:53 /usr/share/ca-certificates/12829.pem
	I1119 22:35:09.658029  283427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12829.pem
	I1119 22:35:09.702393  283427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12829.pem /etc/ssl/certs/51391683.0"
	I1119 22:35:09.711249  283427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/128292.pem && ln -fs /usr/share/ca-certificates/128292.pem /etc/ssl/certs/128292.pem"
	I1119 22:35:09.720256  283427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/128292.pem
	I1119 22:35:09.724448  283427 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 19 21:53 /usr/share/ca-certificates/128292.pem
	I1119 22:35:09.724503  283427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/128292.pem
	I1119 22:35:09.769085  283427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/128292.pem /etc/ssl/certs/3ec20f2e.0"
	I1119 22:35:09.778122  283427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1119 22:35:09.787181  283427 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:35:09.791237  283427 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 19 21:47 /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:35:09.791289  283427 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1119 22:35:09.838393  283427 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1119 22:35:09.848148  283427 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1119 22:35:09.852408  283427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1119 22:35:09.896186  283427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1119 22:35:09.950337  283427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1119 22:35:09.999338  283427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1119 22:35:10.051660  283427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1119 22:35:10.101480  283427 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
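Each `openssl x509 -noout -checkend 86400` run above succeeds only if the certificate is still valid 24 hours from now, which is how the restart path decides whether the existing control-plane certs can be reused. The equivalent check in Go, as a sketch with crypto/x509 (path copied from the first check above):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d,
    // i.e. the case in which `openssl x509 -noout -checkend <seconds>` fails.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(soon, err)
    }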
	I1119 22:35:10.136661  283427 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-409987 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-409987 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:35:10.136783  283427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1119 22:35:10.136868  283427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1119 22:35:10.167859  283427 cri.go:89] found id: "b8661f41149ca707f6324ddb0a00c89afc3d7e90a18f14246ef246fcdd15cae8"
	I1119 22:35:10.167884  283427 cri.go:89] found id: "9ea6b371425c23c51f93f2430382d9425eb4c20205a212ba69de8647057e8a75"
	I1119 22:35:10.167897  283427 cri.go:89] found id: "315d176713f54c2fce1e9bd8c79d670c65d2d6d604b46b0d6811484175780e15"
	I1119 22:35:10.167902  283427 cri.go:89] found id: "ad7b6880b1efc4a16d5cf0cecbb8a520d6cbc6b98ff585507d7a21dc7f0b8140"
	I1119 22:35:10.167907  283427 cri.go:89] found id: ""
	I1119 22:35:10.167957  283427 ssh_runner.go:195] Run: sudo runc list -f json
	W1119 22:35:10.179527  283427 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:35:10Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:35:10.179584  283427 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1119 22:35:10.189385  283427 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1119 22:35:10.189403  283427 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1119 22:35:10.189444  283427 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1119 22:35:10.197533  283427 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:35:10.198693  283427 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-409987" does not appear in /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:35:10.199478  283427 kubeconfig.go:62] /home/jenkins/minikube-integration/21918-9335/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-409987" cluster setting kubeconfig missing "default-k8s-diff-port-409987" context setting]
	I1119 22:35:10.200652  283427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:10.202690  283427 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1119 22:35:10.211340  283427 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1119 22:35:10.211368  283427 kubeadm.go:602] duration metric: took 21.958944ms to restartPrimaryControlPlane
	I1119 22:35:10.211378  283427 kubeadm.go:403] duration metric: took 74.726586ms to StartCluster
	I1119 22:35:10.211393  283427 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:10.211446  283427 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:35:10.213436  283427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:35:10.213672  283427 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:35:10.213827  283427 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:35:10.213912  283427 config.go:182] Loaded profile config "default-k8s-diff-port-409987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:35:10.213917  283427 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-409987"
	I1119 22:35:10.213936  283427 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-409987"
	W1119 22:35:10.213944  283427 addons.go:248] addon storage-provisioner should already be in state true
	I1119 22:35:10.213954  283427 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-409987"
	I1119 22:35:10.213971  283427 host.go:66] Checking if "default-k8s-diff-port-409987" exists ...
	I1119 22:35:10.213977  283427 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-409987"
	W1119 22:35:10.213987  283427 addons.go:248] addon dashboard should already be in state true
	I1119 22:35:10.213987  283427 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-409987"
	I1119 22:35:10.214013  283427 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-409987"
	I1119 22:35:10.214019  283427 host.go:66] Checking if "default-k8s-diff-port-409987" exists ...
	I1119 22:35:10.214335  283427 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:10.214512  283427 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:10.214523  283427 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:10.216567  283427 out.go:179] * Verifying Kubernetes components...
	I1119 22:35:10.217788  283427 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:35:10.243233  283427 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1119 22:35:10.243625  283427 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-409987"
	W1119 22:35:10.243645  283427 addons.go:248] addon default-storageclass should already be in state true
	I1119 22:35:10.243673  283427 host.go:66] Checking if "default-k8s-diff-port-409987" exists ...
	I1119 22:35:10.244864  283427 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:10.245784  283427 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:35:10.245784  283427 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1119 22:35:08.371914  280396 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.691775378s
	I1119 22:35:08.502170  280396 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.82169761s
	I1119 22:35:10.181636  280396 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501639908s
	I1119 22:35:10.199138  280396 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1119 22:35:10.213494  280396 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1119 22:35:10.230132  280396 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1119 22:35:10.230382  280396 kubeadm.go:319] [mark-control-plane] Marking the node auto-654834 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1119 22:35:10.246216  280396 kubeadm.go:319] [bootstrap-token] Using token: 32thjv.xvq2u04pt4z9x5mh
	I1119 22:35:08.327967  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:35:08.328355  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:35:08.328408  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:35:08.328464  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1119 22:35:08.373665  229026 cri.go:89] found id: "3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:35:08.373686  229026 cri.go:89] found id: ""
	I1119 22:35:08.373696  229026 logs.go:282] 1 containers: [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7]
	I1119 22:35:08.373755  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:08.377953  229026 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1119 22:35:08.378016  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1119 22:35:08.413453  229026 cri.go:89] found id: ""
	I1119 22:35:08.413479  229026 logs.go:282] 0 containers: []
	W1119 22:35:08.413488  229026 logs.go:284] No container was found matching "etcd"
	I1119 22:35:08.413496  229026 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1119 22:35:08.413552  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1119 22:35:08.447176  229026 cri.go:89] found id: ""
	I1119 22:35:08.447201  229026 logs.go:282] 0 containers: []
	W1119 22:35:08.447211  229026 logs.go:284] No container was found matching "coredns"
	I1119 22:35:08.447219  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1119 22:35:08.447277  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1119 22:35:08.494002  229026 cri.go:89] found id: "27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:35:08.494026  229026 cri.go:89] found id: ""
	I1119 22:35:08.494037  229026 logs.go:282] 1 containers: [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd]
	I1119 22:35:08.494094  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:08.499593  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1119 22:35:08.500862  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1119 22:35:08.528013  229026 cri.go:89] found id: ""
	I1119 22:35:08.528040  229026 logs.go:282] 0 containers: []
	W1119 22:35:08.528050  229026 logs.go:284] No container was found matching "kube-proxy"
	I1119 22:35:08.528058  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1119 22:35:08.528107  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1119 22:35:08.560761  229026 cri.go:89] found id: "46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:35:08.560783  229026 cri.go:89] found id: ""
	I1119 22:35:08.560792  229026 logs.go:282] 1 containers: [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c]
	I1119 22:35:08.560860  229026 ssh_runner.go:195] Run: which crictl
	I1119 22:35:08.566006  229026 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1119 22:35:08.566074  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1119 22:35:08.598595  229026 cri.go:89] found id: ""
	I1119 22:35:08.598623  229026 logs.go:282] 0 containers: []
	W1119 22:35:08.598634  229026 logs.go:284] No container was found matching "kindnet"
	I1119 22:35:08.598641  229026 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1119 22:35:08.598699  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1119 22:35:08.636300  229026 cri.go:89] found id: ""
	I1119 22:35:08.636330  229026 logs.go:282] 0 containers: []
	W1119 22:35:08.636340  229026 logs.go:284] No container was found matching "storage-provisioner"
	I1119 22:35:08.636351  229026 logs.go:123] Gathering logs for kubelet ...
	I1119 22:35:08.636366  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1119 22:35:08.740568  229026 logs.go:123] Gathering logs for dmesg ...
	I1119 22:35:08.740592  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1119 22:35:08.755375  229026 logs.go:123] Gathering logs for describe nodes ...
	I1119 22:35:08.755396  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1119 22:35:08.824490  229026 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1119 22:35:08.824514  229026 logs.go:123] Gathering logs for kube-apiserver [3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7] ...
	I1119 22:35:08.824530  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d2aa16468fd460a49052aaca6438f1d711f50dc65b3582e7fd78d7948a96be7"
	I1119 22:35:08.855365  229026 logs.go:123] Gathering logs for kube-scheduler [27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd] ...
	I1119 22:35:08.855394  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 27b54023cda3898ab5d93ee33523d206aab891ab7f0d43008a17fd8951c9fdcd"
	I1119 22:35:08.909962  229026 logs.go:123] Gathering logs for kube-controller-manager [46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c] ...
	I1119 22:35:08.909992  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 46c7e0aef04ce8fa15959046cb62a0281a740ed1f2a4ebcfd695e590fcae263c"
	I1119 22:35:08.937927  229026 logs.go:123] Gathering logs for CRI-O ...
	I1119 22:35:08.937957  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1119 22:35:08.983471  229026 logs.go:123] Gathering logs for container status ...
	I1119 22:35:08.983501  229026 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1119 22:35:11.513882  229026 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1119 22:35:11.514273  229026 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I1119 22:35:11.514329  229026 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1119 22:35:11.514386  229026 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
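With the apiserver refusing connections, the harness falls back to enumerating containers per component with `crictl ps -a --quiet --name=<component>` and dumping their logs, which is where the "found id:" lines above come from. A sketch of that ID listing step (a plain os/exec wrapper, not the cri.go implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists CRI container IDs for one component; with --quiet,
    // crictl prints one container ID per line.
    func containerIDs(name string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(string(out), "\n") {
    		if line = strings.TrimSpace(line); line != "" {
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := containerIDs("kube-apiserver")
    	fmt.Println(ids, err)
    }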
	I1119 22:35:10.247788  280396 out.go:252]   - Configuring RBAC rules ...
	I1119 22:35:10.248567  280396 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1119 22:35:10.255669  280396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1119 22:35:10.265244  280396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1119 22:35:10.271269  280396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1119 22:35:10.276831  280396 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1119 22:35:10.281350  280396 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1119 22:35:10.592053  280396 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1119 22:35:11.013078  280396 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1119 22:35:11.597065  280396 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1119 22:35:11.597089  280396 kubeadm.go:319] 
	I1119 22:35:11.597155  280396 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1119 22:35:11.597165  280396 kubeadm.go:319] 
	I1119 22:35:11.597249  280396 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1119 22:35:11.597260  280396 kubeadm.go:319] 
	I1119 22:35:11.597298  280396 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1119 22:35:11.597372  280396 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1119 22:35:11.597460  280396 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1119 22:35:11.597480  280396 kubeadm.go:319] 
	I1119 22:35:11.597549  280396 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1119 22:35:11.597555  280396 kubeadm.go:319] 
	I1119 22:35:11.597621  280396 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1119 22:35:11.597626  280396 kubeadm.go:319] 
	I1119 22:35:11.597689  280396 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1119 22:35:11.597793  280396 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1119 22:35:11.597912  280396 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1119 22:35:11.597920  280396 kubeadm.go:319] 
	I1119 22:35:11.598037  280396 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1119 22:35:11.598131  280396 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1119 22:35:11.598137  280396 kubeadm.go:319] 
	I1119 22:35:11.598497  280396 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 32thjv.xvq2u04pt4z9x5mh \
	I1119 22:35:11.598628  280396 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b \
	I1119 22:35:11.598662  280396 kubeadm.go:319] 	--control-plane 
	I1119 22:35:11.598671  280396 kubeadm.go:319] 
	I1119 22:35:11.598774  280396 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1119 22:35:11.598787  280396 kubeadm.go:319] 
	I1119 22:35:11.598901  280396 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 32thjv.xvq2u04pt4z9x5mh \
	I1119 22:35:11.599036  280396 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:2439dd704c62a4df80b7df703c24f4c6179b30bcadb53b83f39ee9c04d2ad79b 
	I1119 22:35:11.601677  280396 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1119 22:35:11.601889  280396 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
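The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes it from the CA file (the path is assumed to be minikube's usual cert location, not taken from this log):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // caCertHash recomputes the value passed as --discovery-token-ca-cert-hash:
    // sha256 over the CA certificate's DER-encoded public key info.
    func caCertHash(caPath string) (string, error) {
    	data, err := os.ReadFile(caPath)
    	if err != nil {
    		return "", err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return "", fmt.Errorf("no PEM data in %s", caPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
    	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
    	fmt.Println(h, err)
    }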
	I1119 22:35:11.601923  280396 cni.go:84] Creating CNI manager for ""
	I1119 22:35:11.601933  280396 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 22:35:11.604158  280396 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1119 22:35:11.605244  280396 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1119 22:35:11.610160  280396 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1119 22:35:11.610180  280396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1119 22:35:11.623612  280396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1119 22:35:11.899522  280396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1119 22:35:11.899624  280396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:35:11.899624  280396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-654834 minikube.k8s.io/updated_at=2025_11_19T22_35_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58 minikube.k8s.io/name=auto-654834 minikube.k8s.io/primary=true
	I1119 22:35:11.914941  280396 ops.go:34] apiserver oom_adj: -16
	I1119 22:35:10.246885  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1119 22:35:10.246902  283427 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1119 22:35:10.246956  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:10.247136  283427 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:35:10.247144  283427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:35:10.247179  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:10.276288  283427 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:35:10.276314  283427 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:35:10.276373  283427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:10.281460  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:10.284573  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:10.303341  283427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:10.375707  283427 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:35:10.390614  283427 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-409987" to be "Ready" ...
	I1119 22:35:10.397475  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1119 22:35:10.397498  283427 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1119 22:35:10.403603  283427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:35:10.413247  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1119 22:35:10.413266  283427 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1119 22:35:10.413762  283427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:35:10.427442  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1119 22:35:10.427463  283427 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1119 22:35:10.441721  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1119 22:35:10.441741  283427 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1119 22:35:10.459808  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1119 22:35:10.459840  283427 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1119 22:35:10.484617  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1119 22:35:10.484644  283427 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1119 22:35:10.503172  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1119 22:35:10.503196  283427 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1119 22:35:10.521720  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1119 22:35:10.521743  283427 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1119 22:35:10.536104  283427 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:35:10.536126  283427 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1119 22:35:10.548516  283427 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1119 22:35:12.067062  283427 node_ready.go:49] node "default-k8s-diff-port-409987" is "Ready"
	I1119 22:35:12.067102  283427 node_ready.go:38] duration metric: took 1.676455425s for node "default-k8s-diff-port-409987" to be "Ready" ...
	I1119 22:35:12.067119  283427 api_server.go:52] waiting for apiserver process to appear ...
	I1119 22:35:12.067173  283427 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:35:12.643292  283427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.239659162s)
	I1119 22:35:12.643386  283427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.229599349s)
	I1119 22:35:12.643498  283427 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.094952696s)
	I1119 22:35:12.643533  283427 api_server.go:72] duration metric: took 2.429824803s to wait for apiserver process to appear ...
	I1119 22:35:12.643549  283427 api_server.go:88] waiting for apiserver healthz status ...
	I1119 22:35:12.643569  283427 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1119 22:35:12.645197  283427 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-409987 addons enable metrics-server
	
	I1119 22:35:12.648855  283427 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1119 22:35:12.648879  283427 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1119 22:35:12.653061  283427 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1119 22:35:12.654280  283427 addons.go:515] duration metric: took 2.440472022s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	
	
	==> CRI-O <==
	Nov 19 22:34:28 embed-certs-443380 crio[571]: time="2025-11-19T22:34:28.850794794Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 19 22:34:28 embed-certs-443380 crio[571]: time="2025-11-19T22:34:28.85539637Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 19 22:34:28 embed-certs-443380 crio[571]: time="2025-11-19T22:34:28.855418911Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.079456769Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ed0ed560-edf4-48ed-adbf-e8a6d99ec60d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.082700754Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6a9f43b7-e17d-45de-90bb-5c53515a37ad name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.086079632Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh/dashboard-metrics-scraper" id=f96aed0a-bcb2-4b13-88eb-597f079faa78 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.086207439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.095634514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.096301751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.122230248Z" level=info msg="Created container dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh/dashboard-metrics-scraper" id=f96aed0a-bcb2-4b13-88eb-597f079faa78 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.123085833Z" level=info msg="Starting container: dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa" id=9f0d531e-9ab5-4290-9a09-31bdff40fa0c name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.12548669Z" level=info msg="Started container" PID=1798 containerID=dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh/dashboard-metrics-scraper id=9f0d531e-9ab5-4290-9a09-31bdff40fa0c name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3485c433277f598574cf9e83ef142fb933eacb6888e4aec85f4cb0c66b95fac
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.174293173Z" level=info msg="Removing container: 7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb" id=dfd6357b-daf1-4c3e-8494-64f441ef5aa3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:34:43 embed-certs-443380 crio[571]: time="2025-11-19T22:34:43.184439496Z" level=info msg="Removed container 7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh/dashboard-metrics-scraper" id=dfd6357b-daf1-4c3e-8494-64f441ef5aa3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.193602545Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=87106f64-b2e8-4dac-ad72-8fdfad86915d name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.194639975Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9b9a5491-5c90-47b0-8439-e5f443a0765e name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.195852728Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=48786fd0-2141-4295-ae76-a52ca2282a8d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.195988642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.200241738Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.200424922Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7dc50a7176012be2d15946b268a281c80aaf7310fce87e1ec79951b85c92b59d/merged/etc/passwd: no such file or directory"
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.200459131Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7dc50a7176012be2d15946b268a281c80aaf7310fce87e1ec79951b85c92b59d/merged/etc/group: no such file or directory"
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.200756735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.227857576Z" level=info msg="Created container e32255d6628287964aaa1c81ffd5cb354d7246207c8bcc2ec09d6d648898b1bb: kube-system/storage-provisioner/storage-provisioner" id=48786fd0-2141-4295-ae76-a52ca2282a8d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.228457731Z" level=info msg="Starting container: e32255d6628287964aaa1c81ffd5cb354d7246207c8bcc2ec09d6d648898b1bb" id=66d1f0a3-eabe-4eff-82a1-8d1954f36dd9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:34:49 embed-certs-443380 crio[571]: time="2025-11-19T22:34:49.230196923Z" level=info msg="Started container" PID=1812 containerID=e32255d6628287964aaa1c81ffd5cb354d7246207c8bcc2ec09d6d648898b1bb description=kube-system/storage-provisioner/storage-provisioner id=66d1f0a3-eabe-4eff-82a1-8d1954f36dd9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=225f8c19962ed3e6a8eb03a789b66fd4d1fc4e0dbd7bc90cb12e3efce6587d44
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	e32255d662828       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   225f8c19962ed       storage-provisioner                          kube-system
	dfcb372f5b750       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   d3485c433277f       dashboard-metrics-scraper-6ffb444bf9-gthdh   kubernetes-dashboard
	f8a07463feed4       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago       Running             kubernetes-dashboard        0                   0180970baa651       kubernetes-dashboard-855c9754f9-mmf4r        kubernetes-dashboard
	ccf645b5345f1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   75703fbf5cd40       busybox                                      default
	df257a051a08e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   e485fe7331ca5       coredns-66bc5c9577-jmjmf                     kube-system
	1a691b92d4e51       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   b062bcbd17488       kindnet-gq4x5                                kube-system
	3034e4e70b518       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   225f8c19962ed       storage-provisioner                          kube-system
	4ca72c190c2ac       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           57 seconds ago       Running             kube-proxy                  0                   67d91ea5cf499       kube-proxy-r5xtg                             kube-system
	847f5d7dba3ab       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   d133a4446c614       etcd-embed-certs-443380                      kube-system
	f2e2adcbdf2ed       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   e80546c2d0bed       kube-apiserver-embed-certs-443380            kube-system
	185e753f982bb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   f0c8744c432b8       kube-controller-manager-embed-certs-443380   kube-system
	de4131eab48f0       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   6f6e45c082223       kube-scheduler-embed-certs-443380            kube-system
	
	
	==> coredns [df257a051a08e4a48e737e015b9042f67752f43236a8a79e391dd2ec99c2c20c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42920 - 51769 "HINFO IN 5516353465025441952.3947523456498319085. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060988613s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-443380
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-443380
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=embed-certs-443380
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_33_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:33:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-443380
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:35:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:35:08 +0000   Wed, 19 Nov 2025 22:33:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:35:08 +0000   Wed, 19 Nov 2025 22:33:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:35:08 +0000   Wed, 19 Nov 2025 22:33:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:35:08 +0000   Wed, 19 Nov 2025 22:33:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-443380
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                e1eb2e2e-5c81-4978-ae2f-b498e52a3d43
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-jmjmf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-embed-certs-443380                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-gq4x5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-embed-certs-443380             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-embed-certs-443380    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-r5xtg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-embed-certs-443380             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-gthdh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mmf4r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node embed-certs-443380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node embed-certs-443380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node embed-certs-443380 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     116s               kubelet          Node embed-certs-443380 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node embed-certs-443380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node embed-certs-443380 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node embed-certs-443380 event: Registered Node embed-certs-443380 in Controller
	  Normal  NodeReady                99s                kubelet          Node embed-certs-443380 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node embed-certs-443380 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node embed-certs-443380 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node embed-certs-443380 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node embed-certs-443380 event: Registered Node embed-certs-443380 in Controller
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [847f5d7dba3ab17916fecc3496f64e3c432a7aea38029dc58d6ca5c607f49bf4] <==
	{"level":"warn","ts":"2025-11-19T22:34:16.520146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.531123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.537828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.544708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.550360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.555960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.561881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.567422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.573270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.590914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.596579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.602792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.608725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.614654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.620142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.626276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.634040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.641354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.648159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.655051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.661046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.685892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.693300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.702471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:34:16.761455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45114","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 22:35:16 up  1:17,  0 user,  load average: 2.14, 2.61, 1.88
	Linux embed-certs-443380 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a691b92d4e519b79315b18ca34f25853a37f7381a8be39393abe3dd2e5fc138] <==
	I1119 22:34:18.628570       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:34:18.628823       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1119 22:34:18.628998       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:34:18.629018       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:34:18.629042       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:34:18Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:34:18.829707       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:34:18.829928       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:34:18.829972       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:34:18.830139       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:34:19.323114       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:34:19.323145       1 metrics.go:72] Registering metrics
	I1119 22:34:19.323239       1 controller.go:711] "Syncing nftables rules"
	I1119 22:34:28.829758       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:34:28.829857       1 main.go:301] handling current node
	I1119 22:34:38.832432       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:34:38.832461       1 main.go:301] handling current node
	I1119 22:34:48.829913       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:34:48.829964       1 main.go:301] handling current node
	I1119 22:34:58.830399       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:34:58.830442       1 main.go:301] handling current node
	I1119 22:35:08.829462       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1119 22:35:08.829510       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f2e2adcbdf2ed28a414676c53047f68a57fcf6fb525c42cea338059bedb6224c] <==
	I1119 22:34:17.274606       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 22:34:17.274647       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:34:17.275435       1 aggregator.go:171] initial CRD sync complete...
	I1119 22:34:17.275490       1 autoregister_controller.go:144] Starting autoregister controller
	I1119 22:34:17.275517       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1119 22:34:17.275540       1 cache.go:39] Caches are synced for autoregister controller
	I1119 22:34:17.275775       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 22:34:17.275795       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1119 22:34:17.277246       1 cache.go:39] Caches are synced for RemoteAvailability controller
	E1119 22:34:17.284507       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1119 22:34:17.295077       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:34:17.298072       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 22:34:17.305408       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:34:17.699046       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:34:17.731602       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:34:17.750497       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:34:17.757734       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:34:17.763554       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:34:17.797563       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.0.96"}
	I1119 22:34:17.806189       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.149.14"}
	I1119 22:34:18.176531       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:34:20.653256       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1119 22:34:21.102847       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:34:21.102848       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:34:21.202999       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [185e753f982bb76405831c8b358ebdfd082e42f64259200ff2771e2287ccd2a7] <==
	I1119 22:34:20.601218       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1119 22:34:20.601326       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:34:20.601437       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-443380"
	I1119 22:34:20.601487       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 22:34:20.601540       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:34:20.601611       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1119 22:34:20.601668       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1119 22:34:20.602333       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1119 22:34:20.602358       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1119 22:34:20.602419       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1119 22:34:20.602862       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1119 22:34:20.606784       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:34:20.609040       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1119 22:34:20.611364       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:34:20.612574       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1119 22:34:20.613767       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:34:20.616914       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:34:20.618023       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:34:20.620318       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1119 22:34:20.622545       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1119 22:34:20.643129       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:34:20.643150       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:34:20.643157       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:34:20.648909       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:34:20.651063       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [4ca72c190c2ac15c8d89f95f3c61a04b635604352eb3e300c4f8e35cb5f03acd] <==
	I1119 22:34:18.492875       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:34:18.552084       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:34:18.652194       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:34:18.652240       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1119 22:34:18.652337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:34:18.671003       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:34:18.671068       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:34:18.676651       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:34:18.677080       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:34:18.677122       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:34:18.678521       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:34:18.678547       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:34:18.678598       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:34:18.678610       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:34:18.678643       1 config.go:309] "Starting node config controller"
	I1119 22:34:18.678654       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:34:18.678661       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:34:18.678706       1 config.go:200] "Starting service config controller"
	I1119 22:34:18.678747       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:34:18.778795       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1119 22:34:18.778841       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:34:18.778862       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [de4131eab48f0dd8d34f317e598532f0311ff6539bf32deb7148043cda0db569] <==
	I1119 22:34:17.250646       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:34:17.253756       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:34:17.253946       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:34:17.253965       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:34:17.254213       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1119 22:34:17.267015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1119 22:34:17.267125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1119 22:34:17.267154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1119 22:34:17.267174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1119 22:34:17.267191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1119 22:34:17.267238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1119 22:34:17.267306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1119 22:34:17.267317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1119 22:34:17.267404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1119 22:34:17.267490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1119 22:34:17.267537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1119 22:34:17.267611       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1119 22:34:17.267675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1119 22:34:17.267740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1119 22:34:17.267791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1119 22:34:17.267869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1119 22:34:17.268845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1119 22:34:17.270424       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1119 22:34:17.354735       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:34:21 embed-certs-443380 kubelet[739]: I1119 22:34:21.371365     739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2w8\" (UniqueName: \"kubernetes.io/projected/5d678ef9-cff7-48f6-b954-b87ef278aff0-kube-api-access-5s2w8\") pod \"kubernetes-dashboard-855c9754f9-mmf4r\" (UID: \"5d678ef9-cff7-48f6-b954-b87ef278aff0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mmf4r"
	Nov 19 22:34:21 embed-certs-443380 kubelet[739]: I1119 22:34:21.371438     739 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5d678ef9-cff7-48f6-b954-b87ef278aff0-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mmf4r\" (UID: \"5d678ef9-cff7-48f6-b954-b87ef278aff0\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mmf4r"
	Nov 19 22:34:24 embed-certs-443380 kubelet[739]: I1119 22:34:24.120528     739 scope.go:117] "RemoveContainer" containerID="03108b518244fe9d713fcb68eb3164f23b52b2044dc89682adeadf101970b0c0"
	Nov 19 22:34:25 embed-certs-443380 kubelet[739]: I1119 22:34:25.125149     739 scope.go:117] "RemoveContainer" containerID="03108b518244fe9d713fcb68eb3164f23b52b2044dc89682adeadf101970b0c0"
	Nov 19 22:34:25 embed-certs-443380 kubelet[739]: I1119 22:34:25.125303     739 scope.go:117] "RemoveContainer" containerID="7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb"
	Nov 19 22:34:25 embed-certs-443380 kubelet[739]: E1119 22:34:25.125519     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gthdh_kubernetes-dashboard(bfdd4f0c-3777-4106-a45b-97fae9e0d71f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh" podUID="bfdd4f0c-3777-4106-a45b-97fae9e0d71f"
	Nov 19 22:34:26 embed-certs-443380 kubelet[739]: I1119 22:34:26.129499     739 scope.go:117] "RemoveContainer" containerID="7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb"
	Nov 19 22:34:26 embed-certs-443380 kubelet[739]: E1119 22:34:26.129722     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gthdh_kubernetes-dashboard(bfdd4f0c-3777-4106-a45b-97fae9e0d71f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh" podUID="bfdd4f0c-3777-4106-a45b-97fae9e0d71f"
	Nov 19 22:34:27 embed-certs-443380 kubelet[739]: I1119 22:34:27.594003     739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 22:34:28 embed-certs-443380 kubelet[739]: I1119 22:34:28.393935     739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mmf4r" podStartSLOduration=1.921192616 podStartE2EDuration="7.393912951s" podCreationTimestamp="2025-11-19 22:34:21 +0000 UTC" firstStartedPulling="2025-11-19 22:34:21.809069805 +0000 UTC m=+6.821739835" lastFinishedPulling="2025-11-19 22:34:27.281790135 +0000 UTC m=+12.294460170" observedRunningTime="2025-11-19 22:34:28.144686323 +0000 UTC m=+13.157356361" watchObservedRunningTime="2025-11-19 22:34:28.393912951 +0000 UTC m=+13.406582990"
	Nov 19 22:34:31 embed-certs-443380 kubelet[739]: I1119 22:34:31.744111     739 scope.go:117] "RemoveContainer" containerID="7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb"
	Nov 19 22:34:31 embed-certs-443380 kubelet[739]: E1119 22:34:31.744276     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gthdh_kubernetes-dashboard(bfdd4f0c-3777-4106-a45b-97fae9e0d71f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh" podUID="bfdd4f0c-3777-4106-a45b-97fae9e0d71f"
	Nov 19 22:34:43 embed-certs-443380 kubelet[739]: I1119 22:34:43.078699     739 scope.go:117] "RemoveContainer" containerID="7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb"
	Nov 19 22:34:43 embed-certs-443380 kubelet[739]: I1119 22:34:43.172949     739 scope.go:117] "RemoveContainer" containerID="7368c7e798d88b517a5dabad0dc63c6f613b00d88c3aee1bffe45d23461c9fdb"
	Nov 19 22:34:43 embed-certs-443380 kubelet[739]: I1119 22:34:43.173186     739 scope.go:117] "RemoveContainer" containerID="dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa"
	Nov 19 22:34:43 embed-certs-443380 kubelet[739]: E1119 22:34:43.173496     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gthdh_kubernetes-dashboard(bfdd4f0c-3777-4106-a45b-97fae9e0d71f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh" podUID="bfdd4f0c-3777-4106-a45b-97fae9e0d71f"
	Nov 19 22:34:49 embed-certs-443380 kubelet[739]: I1119 22:34:49.193212     739 scope.go:117] "RemoveContainer" containerID="3034e4e70b518b88b7e724642f117283a7941f0f13aabe55e1d1c03789730810"
	Nov 19 22:34:51 embed-certs-443380 kubelet[739]: I1119 22:34:51.744240     739 scope.go:117] "RemoveContainer" containerID="dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa"
	Nov 19 22:34:51 embed-certs-443380 kubelet[739]: E1119 22:34:51.744473     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gthdh_kubernetes-dashboard(bfdd4f0c-3777-4106-a45b-97fae9e0d71f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh" podUID="bfdd4f0c-3777-4106-a45b-97fae9e0d71f"
	Nov 19 22:35:03 embed-certs-443380 kubelet[739]: I1119 22:35:03.078599     739 scope.go:117] "RemoveContainer" containerID="dfcb372f5b7507f433e4d6cc7f8e7a1f651e6d598fb59005a1af625944fea2aa"
	Nov 19 22:35:03 embed-certs-443380 kubelet[739]: E1119 22:35:03.078947     739 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-gthdh_kubernetes-dashboard(bfdd4f0c-3777-4106-a45b-97fae9e0d71f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-gthdh" podUID="bfdd4f0c-3777-4106-a45b-97fae9e0d71f"
	Nov 19 22:35:11 embed-certs-443380 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:35:11 embed-certs-443380 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:35:11 embed-certs-443380 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 22:35:11 embed-certs-443380 systemd[1]: kubelet.service: Consumed 1.625s CPU time.
	
	
	==> kubernetes-dashboard [f8a07463feed4f6a4d7c8e5e4b1d14a47cab1a7fa1ce43c84aba5ba99da95c3f] <==
	2025/11/19 22:34:27 Using namespace: kubernetes-dashboard
	2025/11/19 22:34:27 Using in-cluster config to connect to apiserver
	2025/11/19 22:34:27 Using secret token for csrf signing
	2025/11/19 22:34:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 22:34:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 22:34:27 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 22:34:27 Generating JWE encryption key
	2025/11/19 22:34:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 22:34:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 22:34:27 Initializing JWE encryption key from synchronized object
	2025/11/19 22:34:27 Creating in-cluster Sidecar client
	2025/11/19 22:34:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:34:27 Serving insecurely on HTTP port: 9090
	2025/11/19 22:34:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:34:27 Starting overwatch
	
	
	==> storage-provisioner [3034e4e70b518b88b7e724642f117283a7941f0f13aabe55e1d1c03789730810] <==
	I1119 22:34:18.450409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 22:34:48.455263       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e32255d6628287964aaa1c81ffd5cb354d7246207c8bcc2ec09d6d648898b1bb] <==
	I1119 22:34:49.243220       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:34:49.251545       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:34:49.251644       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:34:49.253570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:52.709332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:34:56.969427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:00.567214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:03.621322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:06.642993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:06.648161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:35:06.648278       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:35:06.648415       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"824b733c-0cb0-473e-abb7-ba15ddd82973", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-443380_d06606dc-0d57-4817-b4a4-6d3c29cf0b5d became leader
	I1119 22:35:06.648447       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-443380_d06606dc-0d57-4817-b4a4-6d3c29cf0b5d!
	W1119 22:35:06.650757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:06.653992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:35:06.748723       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-443380_d06606dc-0d57-4817-b4a4-6d3c29cf0b5d!
	W1119 22:35:08.657748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:08.661790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:10.665407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:10.670567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:12.673914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:12.677624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:14.680137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:14.684090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-443380 -n embed-certs-443380
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-443380 -n embed-certs-443380: exit status 2 (408.364675ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-443380 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.78s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-409987 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-409987 --alsologtostderr -v=1: exit status 80 (1.831249253s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-409987 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:35:58.378249  300816 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:35:58.378486  300816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:35:58.378497  300816 out.go:374] Setting ErrFile to fd 2...
	I1119 22:35:58.378504  300816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:35:58.378792  300816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:35:58.379036  300816 out.go:368] Setting JSON to false
	I1119 22:35:58.379089  300816 mustload.go:66] Loading cluster: default-k8s-diff-port-409987
	I1119 22:35:58.379402  300816 config.go:182] Loaded profile config "default-k8s-diff-port-409987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:35:58.379865  300816 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-409987 --format={{.State.Status}}
	I1119 22:35:58.400510  300816 host.go:66] Checking if "default-k8s-diff-port-409987" exists ...
	I1119 22:35:58.400885  300816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:35:58.457759  300816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-19 22:35:58.448099575 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:35:58.458356  300816 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21918/minikube-v1.37.0-1763575914-21918-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763575914-21918/minikube-v1.37.0-1763575914-21918-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763575914-21918-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-409987 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1119 22:35:58.460092  300816 out.go:179] * Pausing node default-k8s-diff-port-409987 ... 
	I1119 22:35:58.461217  300816 host.go:66] Checking if "default-k8s-diff-port-409987" exists ...
	I1119 22:35:58.461463  300816 ssh_runner.go:195] Run: systemctl --version
	I1119 22:35:58.461497  300816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-409987
	I1119 22:35:58.479119  300816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/default-k8s-diff-port-409987/id_rsa Username:docker}
	I1119 22:35:58.571879  300816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:35:58.584106  300816 pause.go:52] kubelet running: true
	I1119 22:35:58.584166  300816 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:35:58.733136  300816 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:35:58.733234  300816 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:35:58.799039  300816 cri.go:89] found id: "507666fa3cc8548957588749adb1760d2f112c37d344b848ea532467dc639c4b"
	I1119 22:35:58.799066  300816 cri.go:89] found id: "14a525d33c88ae8ae19f0009b68e1520b1cdd5d20974311bcff6db1b1c2908fc"
	I1119 22:35:58.799072  300816 cri.go:89] found id: "f59c54535c6e9f1fafd864cdfcb9f69068ddbcb8bcbe984010638eb57651a952"
	I1119 22:35:58.799077  300816 cri.go:89] found id: "b8b0b895cad35a7684c40b66ff2825a9b3dbba9b4767d6570e136b7009a9a08b"
	I1119 22:35:58.799080  300816 cri.go:89] found id: "8c2ad58c26050333e61182f496a23c0232575fa2b6cc562669bfcfe38dac5cec"
	I1119 22:35:58.799084  300816 cri.go:89] found id: "b8661f41149ca707f6324ddb0a00c89afc3d7e90a18f14246ef246fcdd15cae8"
	I1119 22:35:58.799086  300816 cri.go:89] found id: "9ea6b371425c23c51f93f2430382d9425eb4c20205a212ba69de8647057e8a75"
	I1119 22:35:58.799089  300816 cri.go:89] found id: "315d176713f54c2fce1e9bd8c79d670c65d2d6d604b46b0d6811484175780e15"
	I1119 22:35:58.799092  300816 cri.go:89] found id: "ad7b6880b1efc4a16d5cf0cecbb8a520d6cbc6b98ff585507d7a21dc7f0b8140"
	I1119 22:35:58.799102  300816 cri.go:89] found id: "83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88"
	I1119 22:35:58.799105  300816 cri.go:89] found id: "e856544d8049aafde0d5307cb696e25dd798b39edfacee5f142c940e0173e7e9"
	I1119 22:35:58.799107  300816 cri.go:89] found id: ""
	I1119 22:35:58.799160  300816 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:35:58.810713  300816 retry.go:31] will retry after 313.736626ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:35:58Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:35:59.125014  300816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:35:59.137909  300816 pause.go:52] kubelet running: false
	I1119 22:35:59.137988  300816 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:35:59.283573  300816 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:35:59.283651  300816 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:35:59.359607  300816 cri.go:89] found id: "507666fa3cc8548957588749adb1760d2f112c37d344b848ea532467dc639c4b"
	I1119 22:35:59.359632  300816 cri.go:89] found id: "14a525d33c88ae8ae19f0009b68e1520b1cdd5d20974311bcff6db1b1c2908fc"
	I1119 22:35:59.359639  300816 cri.go:89] found id: "f59c54535c6e9f1fafd864cdfcb9f69068ddbcb8bcbe984010638eb57651a952"
	I1119 22:35:59.359643  300816 cri.go:89] found id: "b8b0b895cad35a7684c40b66ff2825a9b3dbba9b4767d6570e136b7009a9a08b"
	I1119 22:35:59.359647  300816 cri.go:89] found id: "8c2ad58c26050333e61182f496a23c0232575fa2b6cc562669bfcfe38dac5cec"
	I1119 22:35:59.359652  300816 cri.go:89] found id: "b8661f41149ca707f6324ddb0a00c89afc3d7e90a18f14246ef246fcdd15cae8"
	I1119 22:35:59.359655  300816 cri.go:89] found id: "9ea6b371425c23c51f93f2430382d9425eb4c20205a212ba69de8647057e8a75"
	I1119 22:35:59.359659  300816 cri.go:89] found id: "315d176713f54c2fce1e9bd8c79d670c65d2d6d604b46b0d6811484175780e15"
	I1119 22:35:59.359663  300816 cri.go:89] found id: "ad7b6880b1efc4a16d5cf0cecbb8a520d6cbc6b98ff585507d7a21dc7f0b8140"
	I1119 22:35:59.359678  300816 cri.go:89] found id: "83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88"
	I1119 22:35:59.359683  300816 cri.go:89] found id: "e856544d8049aafde0d5307cb696e25dd798b39edfacee5f142c940e0173e7e9"
	I1119 22:35:59.359687  300816 cri.go:89] found id: ""
	I1119 22:35:59.359733  300816 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:35:59.374479  300816 retry.go:31] will retry after 502.97363ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:35:59Z" level=error msg="open /run/runc: no such file or directory"
	I1119 22:35:59.877987  300816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:35:59.892017  300816 pause.go:52] kubelet running: false
	I1119 22:35:59.892074  300816 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1119 22:36:00.042926  300816 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1119 22:36:00.042999  300816 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1119 22:36:00.122147  300816 cri.go:89] found id: "507666fa3cc8548957588749adb1760d2f112c37d344b848ea532467dc639c4b"
	I1119 22:36:00.122165  300816 cri.go:89] found id: "14a525d33c88ae8ae19f0009b68e1520b1cdd5d20974311bcff6db1b1c2908fc"
	I1119 22:36:00.122169  300816 cri.go:89] found id: "f59c54535c6e9f1fafd864cdfcb9f69068ddbcb8bcbe984010638eb57651a952"
	I1119 22:36:00.122172  300816 cri.go:89] found id: "b8b0b895cad35a7684c40b66ff2825a9b3dbba9b4767d6570e136b7009a9a08b"
	I1119 22:36:00.122174  300816 cri.go:89] found id: "8c2ad58c26050333e61182f496a23c0232575fa2b6cc562669bfcfe38dac5cec"
	I1119 22:36:00.122177  300816 cri.go:89] found id: "b8661f41149ca707f6324ddb0a00c89afc3d7e90a18f14246ef246fcdd15cae8"
	I1119 22:36:00.122180  300816 cri.go:89] found id: "9ea6b371425c23c51f93f2430382d9425eb4c20205a212ba69de8647057e8a75"
	I1119 22:36:00.122182  300816 cri.go:89] found id: "315d176713f54c2fce1e9bd8c79d670c65d2d6d604b46b0d6811484175780e15"
	I1119 22:36:00.122184  300816 cri.go:89] found id: "ad7b6880b1efc4a16d5cf0cecbb8a520d6cbc6b98ff585507d7a21dc7f0b8140"
	I1119 22:36:00.122189  300816 cri.go:89] found id: "83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88"
	I1119 22:36:00.122191  300816 cri.go:89] found id: "e856544d8049aafde0d5307cb696e25dd798b39edfacee5f142c940e0173e7e9"
	I1119 22:36:00.122194  300816 cri.go:89] found id: ""
	I1119 22:36:00.122228  300816 ssh_runner.go:195] Run: sudo runc list -f json
	I1119 22:36:00.136805  300816 out.go:203] 
	W1119 22:36:00.138588  300816 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:36:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T22:36:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1119 22:36:00.138609  300816 out.go:285] * 
	* 
	W1119 22:36:00.143512  300816 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1119 22:36:00.144652  300816 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-409987 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-409987
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-409987:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974",
	        "Created": "2025-11-19T22:33:29.234870853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283650,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:35:02.911398495Z",
	            "FinishedAt": "2025-11-19T22:35:01.995807152Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974/hostname",
	        "HostsPath": "/var/lib/docker/containers/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974/hosts",
	        "LogPath": "/var/lib/docker/containers/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974-json.log",
	        "Name": "/default-k8s-diff-port-409987",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-409987:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-409987",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974",
	                "LowerDir": "/var/lib/docker/overlay2/cdfcc6cb40f56d4dd09b547a7f30de2e1186774b31b4b49f04fe4c188a295a12-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cdfcc6cb40f56d4dd09b547a7f30de2e1186774b31b4b49f04fe4c188a295a12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cdfcc6cb40f56d4dd09b547a7f30de2e1186774b31b4b49f04fe4c188a295a12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cdfcc6cb40f56d4dd09b547a7f30de2e1186774b31b4b49f04fe4c188a295a12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-409987",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-409987/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-409987",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-409987",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-409987",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1b67d229df208ea99adad258168dda5501997ab5354a3ae898287aebe803f451",
	            "SandboxKey": "/var/run/docker/netns/1b67d229df20",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-409987": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03e1882d811d99da2a01a21670ff1bc38787a9ad8aa320e4d377f6f9c0dda9f8",
	                    "EndpointID": "8f5141ac351066addc82fed28320f0463315306485364a26531cce3aa2eecb2a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "c6:d2:ef:f5:de:c1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-409987",
	                        "1cd68db04c75"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987: exit status 2 (361.08185ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-409987 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-409987 logs -n 25: (1.201731827s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-654834 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cat /etc/kubernetes/kubelet.conf                                                                                                               │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cat /var/lib/kubelet/config.yaml                                                                                                               │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo systemctl status docker --all --full --no-pager                                                                                                │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ ssh     │ -p auto-654834 sudo systemctl cat docker --no-pager                                                                                                                │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ ssh     │ -p auto-654834 sudo docker system info                                                                                                                             │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ ssh     │ -p auto-654834 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ ssh     │ -p auto-654834 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ ssh     │ -p auto-654834 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cri-dockerd --version                                                                                                                          │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ ssh     │ -p auto-654834 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo containerd config dump                                                                                                                         │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo crio config                                                                                                                                    │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ delete  │ -p auto-654834                                                                                                                                                     │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ image   │ default-k8s-diff-port-409987 image list --format=json                                                                                                              │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ pause   │ -p default-k8s-diff-port-409987 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ start   │ -p custom-flannel-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-654834        │ jenkins │ v1.37.0 │ 19 Nov 25 22:36 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
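	
	The table above records the diagnostic sweep minikube ran against the auto-654834 node before deleting it; for this crio job the CRI-O rows are the relevant ones. As a rough sketch (assuming the auto-654834 profile still existed, which it no longer does after the final delete row), the CRI-O portion could be repeated by hand with the same binary:
	
	  out/minikube-linux-amd64 ssh -p auto-654834 "sudo systemctl status crio --all --full --no-pager"
	  out/minikube-linux-amd64 ssh -p auto-654834 "sudo crio config"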
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:36:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:36:00.037451  301312 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:36:00.037722  301312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:36:00.037733  301312 out.go:374] Setting ErrFile to fd 2...
	I1119 22:36:00.037738  301312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:36:00.037935  301312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:36:00.038459  301312 out.go:368] Setting JSON to false
	I1119 22:36:00.039702  301312 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4708,"bootTime":1763587052,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:36:00.039784  301312 start.go:143] virtualization: kvm guest
	I1119 22:36:00.041949  301312 out.go:179] * [custom-flannel-654834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:36:00.043157  301312 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:36:00.043189  301312 notify.go:221] Checking for updates...
	I1119 22:36:00.045342  301312 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:36:00.046521  301312 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:36:00.047694  301312 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:36:00.048777  301312 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:36:00.049977  301312 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:36:00.051729  301312 config.go:182] Loaded profile config "calico-654834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:36:00.051894  301312 config.go:182] Loaded profile config "default-k8s-diff-port-409987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:36:00.052029  301312 config.go:182] Loaded profile config "kindnet-654834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:36:00.052144  301312 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:36:00.078180  301312 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:36:00.078277  301312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:36:00.144344  301312 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:36:00.133578943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:36:00.144451  301312 docker.go:319] overlay module found
	I1119 22:36:00.146395  301312 out.go:179] * Using the docker driver based on user configuration
	I1119 22:36:00.147520  301312 start.go:309] selected driver: docker
	I1119 22:36:00.147537  301312 start.go:930] validating driver "docker" against <nil>
	I1119 22:36:00.147553  301312 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:36:00.148181  301312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:36:00.213272  301312 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:36:00.20243675 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:36:00.213444  301312 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:36:00.213657  301312 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:36:00.215619  301312 out.go:179] * Using Docker driver with root privileges
	I1119 22:36:00.216874  301312 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1119 22:36:00.216899  301312 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1119 22:36:00.216963  301312 start.go:353] cluster config:
	{Name:custom-flannel-654834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-654834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:36:00.218217  301312 out.go:179] * Starting "custom-flannel-654834" primary control-plane node in "custom-flannel-654834" cluster
	I1119 22:36:00.219246  301312 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:36:00.220884  301312 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:36:00.221960  301312 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:36:00.221987  301312 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:36:00.222001  301312 cache.go:65] Caching tarball of preloaded images
	I1119 22:36:00.222051  301312 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:36:00.222065  301312 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:36:00.222073  301312 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:36:00.222146  301312 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/custom-flannel-654834/config.json ...
	I1119 22:36:00.222166  301312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/custom-flannel-654834/config.json: {Name:mkd3a2c959e64aab18357e631b3b7616d866c06c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:36:00.245201  301312 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:36:00.245225  301312 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:36:00.245242  301312 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:36:00.245270  301312 start.go:360] acquireMachinesLock for custom-flannel-654834: {Name:mkb9cca5f03a70c1f9dce9a5b2ea22f442dd82d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:36:00.245372  301312 start.go:364] duration metric: took 82.796µs to acquireMachinesLock for "custom-flannel-654834"
	I1119 22:36:00.245402  301312 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-654834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-654834 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:36:00.245494  301312 start.go:125] createHost starting for "" (driver="docker")
	W1119 22:35:56.273760  288919 node_ready.go:57] node "kindnet-654834" has "Ready":"False" status (will retry)
	W1119 22:35:58.773542  288919 node_ready.go:57] node "kindnet-654834" has "Ready":"False" status (will retry)
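	
	The "Last Start" log above shows the custom-flannel-654834 profile being generated from flags (CNI from testdata/kube-flannel.yaml, crio runtime), the cached kicbase image and the v1.34.1 cri-o preload tarball being reused, and the resulting cluster config being written out. A minimal way to inspect that saved config, assuming the same MINIKUBE_HOME as in the log, would be:
	
	  cat /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/custom-flannel-654834/config.json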
	
	
	==> CRI-O <==
	Nov 19 22:35:24 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:24.622949209Z" level=info msg="Started container" PID=1778 containerID=ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj/dashboard-metrics-scraper id=f70ff119-4785-4909-8f96-436d9066b76f name=/runtime.v1.RuntimeService/StartContainer sandboxID=f192f44d24b5fa25a5e510ff4078c05fc4835373ca0abb98715a36f967055ed5
	Nov 19 22:35:25 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:25.58034437Z" level=info msg="Removing container: 7a29b930eecd6e8b867196be1348911dd7bb4690a4046d0e21f44d38169a8938" id=e25fed82-a4cd-4498-b82b-caef728a6588 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:35:25 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:25.851863849Z" level=info msg="Removed container 7a29b930eecd6e8b867196be1348911dd7bb4690a4046d0e21f44d38169a8938: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj/dashboard-metrics-scraper" id=e25fed82-a4cd-4498-b82b-caef728a6588 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.500719547Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=79717580-5fa4-4166-886e-e2a324c016fe name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.501808505Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7146041a-efb7-4c60-991e-6befe23f0085 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.502932285Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj/dashboard-metrics-scraper" id=979abf83-2e2f-4497-ac34-91e72e46ec1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.503081604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.517058819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.51777199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.552566637Z" level=info msg="Created container 83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj/dashboard-metrics-scraper" id=979abf83-2e2f-4497-ac34-91e72e46ec1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.553285062Z" level=info msg="Starting container: 83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88" id=2130870b-9c95-4699-88fe-ac3b47034370 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.555329602Z" level=info msg="Started container" PID=1788 containerID=83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj/dashboard-metrics-scraper id=2130870b-9c95-4699-88fe-ac3b47034370 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f192f44d24b5fa25a5e510ff4078c05fc4835373ca0abb98715a36f967055ed5
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.629430444Z" level=info msg="Removing container: ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5" id=8e04a48a-6f21-436a-9481-76acf43c5730 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.64014373Z" level=info msg="Removed container ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj/dashboard-metrics-scraper" id=8e04a48a-6f21-436a-9481-76acf43c5730 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.633127423Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=56f89188-aef6-4af7-9c9a-22d5972c430e name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.634070862Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1ce0c5c6-4cf1-4354-b482-851c6fd13b1f name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.635269565Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b1545536-5b90-4716-89f8-ced72a44fb16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.635427127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.639654773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.639870503Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/147865eacddcb1741aa6f21575faacb6ef892a687461dd0d5ba71f3153676f61/merged/etc/passwd: no such file or directory"
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.639907663Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/147865eacddcb1741aa6f21575faacb6ef892a687461dd0d5ba71f3153676f61/merged/etc/group: no such file or directory"
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.6402066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.672080339Z" level=info msg="Created container 507666fa3cc8548957588749adb1760d2f112c37d344b848ea532467dc639c4b: kube-system/storage-provisioner/storage-provisioner" id=b1545536-5b90-4716-89f8-ced72a44fb16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.672953762Z" level=info msg="Starting container: 507666fa3cc8548957588749adb1760d2f112c37d344b848ea532467dc639c4b" id=ae8ee403-d5a1-4b32-a008-359b1d091660 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.675211147Z" level=info msg="Started container" PID=1802 containerID=507666fa3cc8548957588749adb1760d2f112c37d344b848ea532467dc639c4b description=kube-system/storage-provisioner/storage-provisioner id=ae8ee403-d5a1-4b32-a008-359b1d091660 name=/runtime.v1.RuntimeService/StartContainer sandboxID=65e079677bdcd480c44deb83bfd437cc7ff378ef674654d45803690bf178a828
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	507666fa3cc85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   65e079677bdcd       storage-provisioner                                    kube-system
	83f7fbb5ecf35       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   f192f44d24b5f       dashboard-metrics-scraper-6ffb444bf9-qmqmj             kubernetes-dashboard
	e856544d8049a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   7a28f02c419cc       kubernetes-dashboard-855c9754f9-dcs8c                  kubernetes-dashboard
	14a525d33c88a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   34a8d51cd8522       coredns-66bc5c9577-jv7mb                               kube-system
	eedaa070bdab3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   3517cc1ae82fb       busybox                                                default
	f59c54535c6e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   65e079677bdcd       storage-provisioner                                    kube-system
	b8b0b895cad35       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   f001c0faa89c3       kindnet-8ks5v                                          kube-system
	8c2ad58c26050       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   0a96177c7f805       kube-proxy-ph6ff                                       kube-system
	b8661f41149ca       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   b2ddb42dec525       kube-controller-manager-default-k8s-diff-port-409987   kube-system
	9ea6b371425c2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   09fb5f68e9c09       kube-scheduler-default-k8s-diff-port-409987            kube-system
	315d176713f54       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   08e3a307fe7d8       etcd-default-k8s-diff-port-409987                      kube-system
	ad7b6880b1efc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   ec2ce59d5744a       kube-apiserver-default-k8s-diff-port-409987            kube-system
	
	
	==> coredns [14a525d33c88ae8ae19f0009b68e1520b1cdd5d20974311bcff6db1b1c2908fc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45103 - 7976 "HINFO IN 7147068076219367419.7391803195841887803. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.105481176s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
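	
	The coredns log above shows the pod starting with an unsynced Kubernetes API and its initial list calls to the Service VIP timing out ("dial tcp 10.96.0.1:443: i/o timeout") before connectivity settled. That VIP is just the default kubernetes Service ClusterIP; assuming minikube's usual kubeconfig context naming, it can be confirmed with:
	
	  kubectl --context default-k8s-diff-port-409987 get svc kubernetes -o wide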
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-409987
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-409987
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=default-k8s-diff-port-409987
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_33_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:33:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-409987
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:35:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:35:42 +0000   Wed, 19 Nov 2025 22:33:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:35:42 +0000   Wed, 19 Nov 2025 22:33:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:35:42 +0000   Wed, 19 Nov 2025 22:33:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:35:42 +0000   Wed, 19 Nov 2025 22:34:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-409987
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                d18d242d-a2ed-4a8e-863e-f45978b5a25d
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 coredns-66bc5c9577-jv7mb                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m9s
	  kube-system                 etcd-default-k8s-diff-port-409987                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m15s
	  kube-system                 kindnet-8ks5v                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m10s
	  kube-system                 kube-apiserver-default-k8s-diff-port-409987             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-409987    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-proxy-ph6ff                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-scheduler-default-k8s-diff-port-409987             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qmqmj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dcs8c                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m8s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m15s              kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s              kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s              kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m15s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m11s              node-controller  Node default-k8s-diff-port-409987 event: Registered Node default-k8s-diff-port-409987 in Controller
	  Normal  NodeReady                89s                kubelet          Node default-k8s-diff-port-409987 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node default-k8s-diff-port-409987 event: Registered Node default-k8s-diff-port-409987 in Controller
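	
	The node description above is the usual kubectl view of default-k8s-diff-port-409987: Ready since 22:34:32, CRI-O 1.34.2 as the runtime, pod CIDR 10.244.0.0/24, and the dashboard pods scheduled 45s before capture. A sketch of regenerating it, again assuming minikube's default context naming:
	
	  kubectl --context default-k8s-diff-port-409987 describe node default-k8s-diff-port-409987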
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [315d176713f54c2fce1e9bd8c79d670c65d2d6d604b46b0d6811484175780e15] <==
	{"level":"warn","ts":"2025-11-19T22:35:11.327718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.333637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.343907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.349978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.355547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.361172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.367528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.374598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.381150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.388710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.395983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.402977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.409562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.415589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.435247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.452801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.514682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53996","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T22:35:25.823595Z","caller":"traceutil/trace.go:172","msg":"trace[254352787] linearizableReadLoop","detail":"{readStateIndex:636; appliedIndex:636; }","duration":"132.84035ms","start":"2025-11-19T22:35:25.690727Z","end":"2025-11-19T22:35:25.823567Z","steps":["trace[254352787] 'read index received'  (duration: 132.831381ms)","trace[254352787] 'applied index is now lower than readState.Index'  (duration: 7.78µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:35:25.823987Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.231381ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-jv7mb\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-19T22:35:25.824059Z","caller":"traceutil/trace.go:172","msg":"trace[1109776083] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-jv7mb; range_end:; response_count:1; response_revision:599; }","duration":"133.328956ms","start":"2025-11-19T22:35:25.690718Z","end":"2025-11-19T22:35:25.824047Z","steps":["trace[1109776083] 'agreement among raft nodes before linearized reading'  (duration: 132.946552ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:35:25.824095Z","caller":"traceutil/trace.go:172","msg":"trace[734553560] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"238.24509ms","start":"2025-11-19T22:35:25.585835Z","end":"2025-11-19T22:35:25.824080Z","steps":["trace[734553560] 'process raft request'  (duration: 237.820738ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:35:26.696767Z","caller":"traceutil/trace.go:172","msg":"trace[1541713702] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"110.626699ms","start":"2025-11-19T22:35:26.586123Z","end":"2025-11-19T22:35:26.696750Z","steps":["trace[1541713702] 'process raft request'  (duration: 110.514452ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:35:26.778278Z","caller":"traceutil/trace.go:172","msg":"trace[1604313706] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"187.782655ms","start":"2025-11-19T22:35:26.590476Z","end":"2025-11-19T22:35:26.778259Z","steps":["trace[1604313706] 'process raft request'  (duration: 180.231216ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:35:33.446505Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.027512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2025-11-19T22:35:33.446567Z","caller":"traceutil/trace.go:172","msg":"trace[870799843] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:608; }","duration":"126.102989ms","start":"2025-11-19T22:35:33.320450Z","end":"2025-11-19T22:35:33.446553Z","steps":["trace[870799843] 'range keys from in-memory index tree'  (duration: 125.878549ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:36:01 up  1:18,  0 user,  load average: 3.19, 2.83, 1.99
	Linux default-k8s-diff-port-409987 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b8b0b895cad35a7684c40b66ff2825a9b3dbba9b4767d6570e136b7009a9a08b] <==
	I1119 22:35:13.028484       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:35:13.028771       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:35:13.028967       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:35:13.029050       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:35:13.029083       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:35:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:35:13.231748       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:35:13.231779       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:35:13.231789       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:35:13.231942       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:35:13.622300       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:35:13.622345       1 metrics.go:72] Registering metrics
	I1119 22:35:13.622477       1 controller.go:711] "Syncing nftables rules"
	I1119 22:35:23.231916       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:35:23.232001       1 main.go:301] handling current node
	I1119 22:35:33.234890       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:35:33.234945       1 main.go:301] handling current node
	I1119 22:35:43.232547       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:35:43.232593       1 main.go:301] handling current node
	I1119 22:35:53.235933       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:35:53.235985       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ad7b6880b1efc4a16d5cf0cecbb8a520d6cbc6b98ff585507d7a21dc7f0b8140] <==
	I1119 22:35:12.084136       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 22:35:12.083459       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 22:35:12.085889       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 22:35:12.086441       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 22:35:12.086458       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 22:35:12.094930       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 22:35:12.106340       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:35:12.107760       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 22:35:12.107825       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:35:12.110244       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:35:12.157890       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 22:35:12.164339       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 22:35:12.164456       1 policy_source.go:240] refreshing policies
	I1119 22:35:12.169883       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:35:12.426809       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:35:12.451640       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:35:12.466618       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:35:12.475347       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:35:12.482118       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:35:12.518664       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.173.1"}
	I1119 22:35:12.538370       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.202.101"}
	I1119 22:35:13.010419       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:35:15.454395       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:35:15.802089       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:35:16.003062       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b8661f41149ca707f6324ddb0a00c89afc3d7e90a18f14246ef246fcdd15cae8] <==
	I1119 22:35:15.449024       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:35:15.449054       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:35:15.449061       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:35:15.449128       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:35:15.449136       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:35:15.449159       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:35:15.449201       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:35:15.449291       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:35:15.449294       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-409987"
	I1119 22:35:15.449341       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 22:35:15.450028       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:35:15.454910       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:35:15.465882       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 22:35:15.465917       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:35:15.471214       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:35:15.471232       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:35:15.471242       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:35:15.473270       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:35:15.477162       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:35:15.479393       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:35:15.481288       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:35:15.483479       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:35:15.485885       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:35:15.491035       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 22:35:15.510667       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8c2ad58c26050333e61182f496a23c0232575fa2b6cc562669bfcfe38dac5cec] <==
	I1119 22:35:12.899332       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:35:12.969795       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:35:13.070695       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:35:13.070740       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 22:35:13.070863       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:35:13.096162       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:35:13.096220       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:35:13.103025       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:35:13.103318       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:35:13.103346       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:35:13.104925       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:35:13.104949       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:35:13.105049       1 config.go:200] "Starting service config controller"
	I1119 22:35:13.105062       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:35:13.105097       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:35:13.105103       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:35:13.105567       1 config.go:309] "Starting node config controller"
	I1119 22:35:13.105583       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:35:13.105590       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:35:13.206011       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:35:13.206037       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:35:13.206046       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9ea6b371425c23c51f93f2430382d9425eb4c20205a212ba69de8647057e8a75] <==
	I1119 22:35:10.347335       1 serving.go:386] Generated self-signed cert in-memory
	W1119 22:35:12.020575       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 22:35:12.020607       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 22:35:12.020633       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 22:35:12.020642       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 22:35:12.090355       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 22:35:12.090391       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:35:12.096357       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:35:12.096452       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:35:12.096807       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:35:12.096860       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 22:35:12.197467       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:35:14 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:14.719471     747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 22:35:16 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:16.176326     747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzp2r\" (UniqueName: \"kubernetes.io/projected/731791a6-3fa2-4329-9563-847063f17875-kube-api-access-rzp2r\") pod \"kubernetes-dashboard-855c9754f9-dcs8c\" (UID: \"731791a6-3fa2-4329-9563-847063f17875\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dcs8c"
	Nov 19 22:35:16 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:16.176603     747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/731791a6-3fa2-4329-9563-847063f17875-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dcs8c\" (UID: \"731791a6-3fa2-4329-9563-847063f17875\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dcs8c"
	Nov 19 22:35:16 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:16.176658     747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d6526810-a5d4-469f-b17a-387ea66cbf97-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qmqmj\" (UID: \"d6526810-a5d4-469f-b17a-387ea66cbf97\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj"
	Nov 19 22:35:16 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:16.176690     747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4g9n\" (UniqueName: \"kubernetes.io/projected/d6526810-a5d4-469f-b17a-387ea66cbf97-kube-api-access-k4g9n\") pod \"dashboard-metrics-scraper-6ffb444bf9-qmqmj\" (UID: \"d6526810-a5d4-469f-b17a-387ea66cbf97\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj"
	Nov 19 22:35:22 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:22.583301     747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dcs8c" podStartSLOduration=1.36648315 podStartE2EDuration="6.583279334s" podCreationTimestamp="2025-11-19 22:35:16 +0000 UTC" firstStartedPulling="2025-11-19 22:35:16.416987721 +0000 UTC m=+7.011208822" lastFinishedPulling="2025-11-19 22:35:21.633783917 +0000 UTC m=+12.228005006" observedRunningTime="2025-11-19 22:35:22.58315081 +0000 UTC m=+13.177371917" watchObservedRunningTime="2025-11-19 22:35:22.583279334 +0000 UTC m=+13.177500443"
	Nov 19 22:35:24 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:24.575432     747 scope.go:117] "RemoveContainer" containerID="7a29b930eecd6e8b867196be1348911dd7bb4690a4046d0e21f44d38169a8938"
	Nov 19 22:35:25 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:25.579026     747 scope.go:117] "RemoveContainer" containerID="7a29b930eecd6e8b867196be1348911dd7bb4690a4046d0e21f44d38169a8938"
	Nov 19 22:35:25 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:25.579192     747 scope.go:117] "RemoveContainer" containerID="ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5"
	Nov 19 22:35:25 default-k8s-diff-port-409987 kubelet[747]: E1119 22:35:25.579403     747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qmqmj_kubernetes-dashboard(d6526810-a5d4-469f-b17a-387ea66cbf97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj" podUID="d6526810-a5d4-469f-b17a-387ea66cbf97"
	Nov 19 22:35:26 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:26.583117     747 scope.go:117] "RemoveContainer" containerID="ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5"
	Nov 19 22:35:26 default-k8s-diff-port-409987 kubelet[747]: E1119 22:35:26.583270     747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qmqmj_kubernetes-dashboard(d6526810-a5d4-469f-b17a-387ea66cbf97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj" podUID="d6526810-a5d4-469f-b17a-387ea66cbf97"
	Nov 19 22:35:28 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:28.341563     747 scope.go:117] "RemoveContainer" containerID="ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5"
	Nov 19 22:35:28 default-k8s-diff-port-409987 kubelet[747]: E1119 22:35:28.341798     747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qmqmj_kubernetes-dashboard(d6526810-a5d4-469f-b17a-387ea66cbf97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj" podUID="d6526810-a5d4-469f-b17a-387ea66cbf97"
	Nov 19 22:35:42 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:42.500162     747 scope.go:117] "RemoveContainer" containerID="ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5"
	Nov 19 22:35:42 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:42.628034     747 scope.go:117] "RemoveContainer" containerID="ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5"
	Nov 19 22:35:42 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:42.628286     747 scope.go:117] "RemoveContainer" containerID="83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88"
	Nov 19 22:35:42 default-k8s-diff-port-409987 kubelet[747]: E1119 22:35:42.628487     747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qmqmj_kubernetes-dashboard(d6526810-a5d4-469f-b17a-387ea66cbf97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj" podUID="d6526810-a5d4-469f-b17a-387ea66cbf97"
	Nov 19 22:35:43 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:43.632725     747 scope.go:117] "RemoveContainer" containerID="f59c54535c6e9f1fafd864cdfcb9f69068ddbcb8bcbe984010638eb57651a952"
	Nov 19 22:35:48 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:48.341987     747 scope.go:117] "RemoveContainer" containerID="83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88"
	Nov 19 22:35:48 default-k8s-diff-port-409987 kubelet[747]: E1119 22:35:48.342255     747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qmqmj_kubernetes-dashboard(d6526810-a5d4-469f-b17a-387ea66cbf97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj" podUID="d6526810-a5d4-469f-b17a-387ea66cbf97"
	Nov 19 22:35:58 default-k8s-diff-port-409987 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:35:58 default-k8s-diff-port-409987 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:35:58 default-k8s-diff-port-409987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 22:35:58 default-k8s-diff-port-409987 systemd[1]: kubelet.service: Consumed 1.581s CPU time.
	
	
	==> kubernetes-dashboard [e856544d8049aafde0d5307cb696e25dd798b39edfacee5f142c940e0173e7e9] <==
	2025/11/19 22:35:21 Starting overwatch
	2025/11/19 22:35:21 Using namespace: kubernetes-dashboard
	2025/11/19 22:35:21 Using in-cluster config to connect to apiserver
	2025/11/19 22:35:21 Using secret token for csrf signing
	2025/11/19 22:35:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 22:35:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 22:35:21 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 22:35:21 Generating JWE encryption key
	2025/11/19 22:35:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 22:35:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 22:35:21 Initializing JWE encryption key from synchronized object
	2025/11/19 22:35:21 Creating in-cluster Sidecar client
	2025/11/19 22:35:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:35:21 Serving insecurely on HTTP port: 9090
	2025/11/19 22:35:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [507666fa3cc8548957588749adb1760d2f112c37d344b848ea532467dc639c4b] <==
	I1119 22:35:43.690522       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:35:43.700515       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:35:43.700587       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:35:43.702675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:47.158295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:51.418833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:55.018399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:58.072339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:36:01.095546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:36:01.101722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:36:01.101882       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:36:01.101987       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"238e7196-679c-4ac3-8e69-1a8c292573ac", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-409987_fb94d7bf-9677-4765-85d9-af6755306adc became leader
	I1119 22:36:01.102078       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-409987_fb94d7bf-9677-4765-85d9-af6755306adc!
	W1119 22:36:01.104109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:36:01.107397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:36:01.203150       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-409987_fb94d7bf-9677-4765-85d9-af6755306adc!
	
	
	==> storage-provisioner [f59c54535c6e9f1fafd864cdfcb9f69068ddbcb8bcbe984010638eb57651a952] <==
	I1119 22:35:12.866010       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 22:35:42.867674       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987: exit status 2 (436.78817ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-409987 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-409987
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-409987:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974",
	        "Created": "2025-11-19T22:33:29.234870853Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283650,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-19T22:35:02.911398495Z",
	            "FinishedAt": "2025-11-19T22:35:01.995807152Z"
	        },
	        "Image": "sha256:da868a89527ea3b5fe65ed3ef232d132379e38c55dd4637db2e5af21a1522b2d",
	        "ResolvConfPath": "/var/lib/docker/containers/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974/hostname",
	        "HostsPath": "/var/lib/docker/containers/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974/hosts",
	        "LogPath": "/var/lib/docker/containers/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974/1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974-json.log",
	        "Name": "/default-k8s-diff-port-409987",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-409987:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-409987",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1cd68db04c75db1df58fdfe83b55bfe91011adbb4eeec0678e77c9caa243b974",
	                "LowerDir": "/var/lib/docker/overlay2/cdfcc6cb40f56d4dd09b547a7f30de2e1186774b31b4b49f04fe4c188a295a12-init/diff:/var/lib/docker/overlay2/c41f1595c388ad43d4eee36b4b1c501918a9630ba915eb36eae3b6bac2697b01/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cdfcc6cb40f56d4dd09b547a7f30de2e1186774b31b4b49f04fe4c188a295a12/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cdfcc6cb40f56d4dd09b547a7f30de2e1186774b31b4b49f04fe4c188a295a12/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cdfcc6cb40f56d4dd09b547a7f30de2e1186774b31b4b49f04fe4c188a295a12/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-409987",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-409987/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-409987",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-409987",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-409987",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "1b67d229df208ea99adad258168dda5501997ab5354a3ae898287aebe803f451",
	            "SandboxKey": "/var/run/docker/netns/1b67d229df20",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-409987": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "03e1882d811d99da2a01a21670ff1bc38787a9ad8aa320e4d377f6f9c0dda9f8",
	                    "EndpointID": "8f5141ac351066addc82fed28320f0463315306485364a26531cce3aa2eecb2a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "c6:d2:ef:f5:de:c1",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-409987",
	                        "1cd68db04c75"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987: exit status 2 (364.857949ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-409987 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-409987 logs -n 25: (3.744092697s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-654834 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cat /etc/kubernetes/kubelet.conf                                                                                                               │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cat /var/lib/kubelet/config.yaml                                                                                                               │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo systemctl status docker --all --full --no-pager                                                                                                │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ ssh     │ -p auto-654834 sudo systemctl cat docker --no-pager                                                                                                                │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ ssh     │ -p auto-654834 sudo docker system info                                                                                                                             │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ ssh     │ -p auto-654834 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ ssh     │ -p auto-654834 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ ssh     │ -p auto-654834 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cri-dockerd --version                                                                                                                          │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ ssh     │ -p auto-654834 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo containerd config dump                                                                                                                         │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ ssh     │ -p auto-654834 sudo crio config                                                                                                                                    │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ delete  │ -p auto-654834                                                                                                                                                     │ auto-654834                  │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ image   │ default-k8s-diff-port-409987 image list --format=json                                                                                                              │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │ 19 Nov 25 22:35 UTC │
	│ pause   │ -p default-k8s-diff-port-409987 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-409987 │ jenkins │ v1.37.0 │ 19 Nov 25 22:35 UTC │                     │
	│ start   │ -p custom-flannel-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-654834        │ jenkins │ v1.37.0 │ 19 Nov 25 22:36 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 22:36:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 22:36:00.037451  301312 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:36:00.037722  301312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:36:00.037733  301312 out.go:374] Setting ErrFile to fd 2...
	I1119 22:36:00.037738  301312 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:36:00.037935  301312 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:36:00.038459  301312 out.go:368] Setting JSON to false
	I1119 22:36:00.039702  301312 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4708,"bootTime":1763587052,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:36:00.039784  301312 start.go:143] virtualization: kvm guest
	I1119 22:36:00.041949  301312 out.go:179] * [custom-flannel-654834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:36:00.043157  301312 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:36:00.043189  301312 notify.go:221] Checking for updates...
	I1119 22:36:00.045342  301312 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:36:00.046521  301312 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:36:00.047694  301312 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:36:00.048777  301312 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:36:00.049977  301312 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:36:00.051729  301312 config.go:182] Loaded profile config "calico-654834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:36:00.051894  301312 config.go:182] Loaded profile config "default-k8s-diff-port-409987": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:36:00.052029  301312 config.go:182] Loaded profile config "kindnet-654834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:36:00.052144  301312 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:36:00.078180  301312 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:36:00.078277  301312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:36:00.144344  301312 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:36:00.133578943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:36:00.144451  301312 docker.go:319] overlay module found
	I1119 22:36:00.146395  301312 out.go:179] * Using the docker driver based on user configuration
	I1119 22:36:00.147520  301312 start.go:309] selected driver: docker
	I1119 22:36:00.147537  301312 start.go:930] validating driver "docker" against <nil>
	I1119 22:36:00.147553  301312 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:36:00.148181  301312 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:36:00.213272  301312 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:36:00.20243675 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:36:00.213444  301312 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 22:36:00.213657  301312 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1119 22:36:00.215619  301312 out.go:179] * Using Docker driver with root privileges
	I1119 22:36:00.216874  301312 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1119 22:36:00.216899  301312 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1119 22:36:00.216963  301312 start.go:353] cluster config:
	{Name:custom-flannel-654834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-654834 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 22:36:00.218217  301312 out.go:179] * Starting "custom-flannel-654834" primary control-plane node in "custom-flannel-654834" cluster
	I1119 22:36:00.219246  301312 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 22:36:00.220884  301312 out.go:179] * Pulling base image v0.0.48-1763561786-21918 ...
	I1119 22:36:00.221960  301312 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1119 22:36:00.221987  301312 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1119 22:36:00.222001  301312 cache.go:65] Caching tarball of preloaded images
	I1119 22:36:00.222051  301312 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 22:36:00.222065  301312 preload.go:238] Found /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1119 22:36:00.222073  301312 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1119 22:36:00.222146  301312 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/custom-flannel-654834/config.json ...
	I1119 22:36:00.222166  301312 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/custom-flannel-654834/config.json: {Name:mkd3a2c959e64aab18357e631b3b7616d866c06c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:36:00.245201  301312 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon, skipping pull
	I1119 22:36:00.245225  301312 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 exists in daemon, skipping load
	I1119 22:36:00.245242  301312 cache.go:243] Successfully downloaded all kic artifacts
	I1119 22:36:00.245270  301312 start.go:360] acquireMachinesLock for custom-flannel-654834: {Name:mkb9cca5f03a70c1f9dce9a5b2ea22f442dd82d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1119 22:36:00.245372  301312 start.go:364] duration metric: took 82.796µs to acquireMachinesLock for "custom-flannel-654834"
	I1119 22:36:00.245402  301312 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-654834 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-654834 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:36:00.245494  301312 start.go:125] createHost starting for "" (driver="docker")
	W1119 22:35:56.273760  288919 node_ready.go:57] node "kindnet-654834" has "Ready":"False" status (will retry)
	W1119 22:35:58.773542  288919 node_ready.go:57] node "kindnet-654834" has "Ready":"False" status (will retry)
	I1119 22:35:59.342172  293392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:35:59.842984  293392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:00.343060  293392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:00.843091  293392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:01.342746  293392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:01.843075  293392 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1119 22:36:01.944641  293392 kubeadm.go:1114] duration metric: took 4.197686044s to wait for elevateKubeSystemPrivileges
	I1119 22:36:01.944678  293392 kubeadm.go:403] duration metric: took 16.817992538s to StartCluster
	I1119 22:36:01.944697  293392 settings.go:142] acquiring lock: {Name:mka9fe6de5428a936981d9c3d0340faa5cc1f060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:36:01.944767  293392 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:36:01.946544  293392 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/kubeconfig: {Name:mk73c5a788ba2c6cd33010a8266a7a70ebebfa0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 22:36:01.946790  293392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1119 22:36:01.946801  293392 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1119 22:36:01.946903  293392 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1119 22:36:01.946991  293392 addons.go:70] Setting storage-provisioner=true in profile "calico-654834"
	I1119 22:36:01.947004  293392 addons.go:70] Setting default-storageclass=true in profile "calico-654834"
	I1119 22:36:01.947015  293392 config.go:182] Loaded profile config "calico-654834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:36:01.947024  293392 addons.go:239] Setting addon storage-provisioner=true in "calico-654834"
	I1119 22:36:01.947028  293392 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-654834"
	I1119 22:36:01.947053  293392 host.go:66] Checking if "calico-654834" exists ...
	I1119 22:36:01.947517  293392 cli_runner.go:164] Run: docker container inspect calico-654834 --format={{.State.Status}}
	I1119 22:36:01.947625  293392 cli_runner.go:164] Run: docker container inspect calico-654834 --format={{.State.Status}}
	I1119 22:36:01.950993  293392 out.go:179] * Verifying Kubernetes components...
	I1119 22:36:01.952455  293392 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1119 22:36:01.976170  293392 addons.go:239] Setting addon default-storageclass=true in "calico-654834"
	I1119 22:36:01.976219  293392 host.go:66] Checking if "calico-654834" exists ...
	I1119 22:36:01.976654  293392 cli_runner.go:164] Run: docker container inspect calico-654834 --format={{.State.Status}}
	I1119 22:36:01.979110  293392 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1119 22:36:01.980539  293392 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:36:01.980560  293392 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1119 22:36:01.980612  293392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-654834
	I1119 22:36:02.006924  293392 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1119 22:36:02.006951  293392 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1119 22:36:02.007012  293392 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-654834
	I1119 22:36:02.008842  293392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/calico-654834/id_rsa Username:docker}
	I1119 22:36:02.033339  293392 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/calico-654834/id_rsa Username:docker}
	I1119 22:36:02.066492  293392 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1119 22:36:02.125765  293392 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1119 22:36:02.138297  293392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1119 22:36:02.168953  293392 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1119 22:36:02.320639  293392 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1119 22:36:02.326187  293392 node_ready.go:35] waiting up to 15m0s for node "calico-654834" to be "Ready" ...
	I1119 22:36:02.521295  293392 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	
	
	==> CRI-O <==
	Nov 19 22:35:24 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:24.622949209Z" level=info msg="Started container" PID=1778 containerID=ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj/dashboard-metrics-scraper id=f70ff119-4785-4909-8f96-436d9066b76f name=/runtime.v1.RuntimeService/StartContainer sandboxID=f192f44d24b5fa25a5e510ff4078c05fc4835373ca0abb98715a36f967055ed5
	Nov 19 22:35:25 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:25.58034437Z" level=info msg="Removing container: 7a29b930eecd6e8b867196be1348911dd7bb4690a4046d0e21f44d38169a8938" id=e25fed82-a4cd-4498-b82b-caef728a6588 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:35:25 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:25.851863849Z" level=info msg="Removed container 7a29b930eecd6e8b867196be1348911dd7bb4690a4046d0e21f44d38169a8938: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj/dashboard-metrics-scraper" id=e25fed82-a4cd-4498-b82b-caef728a6588 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.500719547Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=79717580-5fa4-4166-886e-e2a324c016fe name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.501808505Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7146041a-efb7-4c60-991e-6befe23f0085 name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.502932285Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj/dashboard-metrics-scraper" id=979abf83-2e2f-4497-ac34-91e72e46ec1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.503081604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.517058819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.51777199Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.552566637Z" level=info msg="Created container 83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj/dashboard-metrics-scraper" id=979abf83-2e2f-4497-ac34-91e72e46ec1d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.553285062Z" level=info msg="Starting container: 83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88" id=2130870b-9c95-4699-88fe-ac3b47034370 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.555329602Z" level=info msg="Started container" PID=1788 containerID=83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj/dashboard-metrics-scraper id=2130870b-9c95-4699-88fe-ac3b47034370 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f192f44d24b5fa25a5e510ff4078c05fc4835373ca0abb98715a36f967055ed5
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.629430444Z" level=info msg="Removing container: ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5" id=8e04a48a-6f21-436a-9481-76acf43c5730 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:35:42 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:42.64014373Z" level=info msg="Removed container ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj/dashboard-metrics-scraper" id=8e04a48a-6f21-436a-9481-76acf43c5730 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.633127423Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=56f89188-aef6-4af7-9c9a-22d5972c430e name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.634070862Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1ce0c5c6-4cf1-4354-b482-851c6fd13b1f name=/runtime.v1.ImageService/ImageStatus
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.635269565Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b1545536-5b90-4716-89f8-ced72a44fb16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.635427127Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.639654773Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.639870503Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/147865eacddcb1741aa6f21575faacb6ef892a687461dd0d5ba71f3153676f61/merged/etc/passwd: no such file or directory"
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.639907663Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/147865eacddcb1741aa6f21575faacb6ef892a687461dd0d5ba71f3153676f61/merged/etc/group: no such file or directory"
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.6402066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.672080339Z" level=info msg="Created container 507666fa3cc8548957588749adb1760d2f112c37d344b848ea532467dc639c4b: kube-system/storage-provisioner/storage-provisioner" id=b1545536-5b90-4716-89f8-ced72a44fb16 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.672953762Z" level=info msg="Starting container: 507666fa3cc8548957588749adb1760d2f112c37d344b848ea532467dc639c4b" id=ae8ee403-d5a1-4b32-a008-359b1d091660 name=/runtime.v1.RuntimeService/StartContainer
	Nov 19 22:35:43 default-k8s-diff-port-409987 crio[580]: time="2025-11-19T22:35:43.675211147Z" level=info msg="Started container" PID=1802 containerID=507666fa3cc8548957588749adb1760d2f112c37d344b848ea532467dc639c4b description=kube-system/storage-provisioner/storage-provisioner id=ae8ee403-d5a1-4b32-a008-359b1d091660 name=/runtime.v1.RuntimeService/StartContainer sandboxID=65e079677bdcd480c44deb83bfd437cc7ff378ef674654d45803690bf178a828
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	507666fa3cc85       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   65e079677bdcd       storage-provisioner                                    kube-system
	83f7fbb5ecf35       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   f192f44d24b5f       dashboard-metrics-scraper-6ffb444bf9-qmqmj             kubernetes-dashboard
	e856544d8049a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   7a28f02c419cc       kubernetes-dashboard-855c9754f9-dcs8c                  kubernetes-dashboard
	14a525d33c88a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   34a8d51cd8522       coredns-66bc5c9577-jv7mb                               kube-system
	eedaa070bdab3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   3517cc1ae82fb       busybox                                                default
	f59c54535c6e9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   65e079677bdcd       storage-provisioner                                    kube-system
	b8b0b895cad35       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   f001c0faa89c3       kindnet-8ks5v                                          kube-system
	8c2ad58c26050       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   0a96177c7f805       kube-proxy-ph6ff                                       kube-system
	b8661f41149ca       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   b2ddb42dec525       kube-controller-manager-default-k8s-diff-port-409987   kube-system
	9ea6b371425c2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   09fb5f68e9c09       kube-scheduler-default-k8s-diff-port-409987            kube-system
	315d176713f54       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   08e3a307fe7d8       etcd-default-k8s-diff-port-409987                      kube-system
	ad7b6880b1efc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   ec2ce59d5744a       kube-apiserver-default-k8s-diff-port-409987            kube-system
	
	
	==> coredns [14a525d33c88ae8ae19f0009b68e1520b1cdd5d20974311bcff6db1b1c2908fc] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45103 - 7976 "HINFO IN 7147068076219367419.7391803195841887803. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.105481176s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-409987
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-409987
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=08454a179ffa60c8ae500105aac58654b5cdef58
	                    minikube.k8s.io/name=default-k8s-diff-port-409987
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_19T22_33_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Nov 2025 22:33:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-409987
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Nov 2025 22:35:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Nov 2025 22:35:42 +0000   Wed, 19 Nov 2025 22:33:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Nov 2025 22:35:42 +0000   Wed, 19 Nov 2025 22:33:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Nov 2025 22:35:42 +0000   Wed, 19 Nov 2025 22:33:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Nov 2025 22:35:42 +0000   Wed, 19 Nov 2025 22:34:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-409987
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a4f1f11dedb3fb2ad8898bb691dcfbb
	  System UUID:                d18d242d-a2ed-4a8e-863e-f45978b5a25d
	  Boot ID:                    2e6ef2ee-4f67-4d48-891c-e886dec6b09f
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-jv7mb                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m11s
	  kube-system                 etcd-default-k8s-diff-port-409987                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m17s
	  kube-system                 kindnet-8ks5v                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m12s
	  kube-system                 kube-apiserver-default-k8s-diff-port-409987             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-409987    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-ph6ff                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-default-k8s-diff-port-409987             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qmqmj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dcs8c                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m10s              kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m17s              kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s              kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s              kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m17s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m13s              node-controller  Node default-k8s-diff-port-409987 event: Registered Node default-k8s-diff-port-409987 in Controller
	  Normal  NodeReady                91s                kubelet          Node default-k8s-diff-port-409987 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-409987 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node default-k8s-diff-port-409987 event: Registered Node default-k8s-diff-port-409987 in Controller
	
	
	==> dmesg <==
	[  +0.081562] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.023699] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.202131] kauditd_printk_skb: 47 callbacks suppressed
	[Nov19 21:49] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.012566] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023882] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023970] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023788] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +1.023895] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[  +4.031562] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[Nov19 21:50] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +16.382369] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	[ +32.251669] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 5a 76 4b 27 f4 f4 76 36 f8 f2 37 db 08 00
	
	
	==> etcd [315d176713f54c2fce1e9bd8c79d670c65d2d6d604b46b0d6811484175780e15] <==
	{"level":"warn","ts":"2025-11-19T22:35:11.327718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.333637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.343907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.349978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.355547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.361172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.367528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.374598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.381150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.388710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.395983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.402977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.409562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.415589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.435247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.452801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-19T22:35:11.514682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53996","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-19T22:35:25.823595Z","caller":"traceutil/trace.go:172","msg":"trace[254352787] linearizableReadLoop","detail":"{readStateIndex:636; appliedIndex:636; }","duration":"132.84035ms","start":"2025-11-19T22:35:25.690727Z","end":"2025-11-19T22:35:25.823567Z","steps":["trace[254352787] 'read index received'  (duration: 132.831381ms)","trace[254352787] 'applied index is now lower than readState.Index'  (duration: 7.78µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-19T22:35:25.823987Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.231381ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-jv7mb\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-19T22:35:25.824059Z","caller":"traceutil/trace.go:172","msg":"trace[1109776083] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-jv7mb; range_end:; response_count:1; response_revision:599; }","duration":"133.328956ms","start":"2025-11-19T22:35:25.690718Z","end":"2025-11-19T22:35:25.824047Z","steps":["trace[1109776083] 'agreement among raft nodes before linearized reading'  (duration: 132.946552ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:35:25.824095Z","caller":"traceutil/trace.go:172","msg":"trace[734553560] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"238.24509ms","start":"2025-11-19T22:35:25.585835Z","end":"2025-11-19T22:35:25.824080Z","steps":["trace[734553560] 'process raft request'  (duration: 237.820738ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:35:26.696767Z","caller":"traceutil/trace.go:172","msg":"trace[1541713702] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"110.626699ms","start":"2025-11-19T22:35:26.586123Z","end":"2025-11-19T22:35:26.696750Z","steps":["trace[1541713702] 'process raft request'  (duration: 110.514452ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-19T22:35:26.778278Z","caller":"traceutil/trace.go:172","msg":"trace[1604313706] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"187.782655ms","start":"2025-11-19T22:35:26.590476Z","end":"2025-11-19T22:35:26.778259Z","steps":["trace[1604313706] 'process raft request'  (duration: 180.231216ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-19T22:35:33.446505Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.027512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/default/kubernetes\" limit:1 ","response":"range_response_count:1 size:474"}
	{"level":"info","ts":"2025-11-19T22:35:33.446567Z","caller":"traceutil/trace.go:172","msg":"trace[870799843] range","detail":"{range_begin:/registry/endpointslices/default/kubernetes; range_end:; response_count:1; response_revision:608; }","duration":"126.102989ms","start":"2025-11-19T22:35:33.320450Z","end":"2025-11-19T22:35:33.446553Z","steps":["trace[870799843] 'range keys from in-memory index tree'  (duration: 125.878549ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:36:04 up  1:18,  0 user,  load average: 3.58, 2.92, 2.02
	Linux default-k8s-diff-port-409987 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b8b0b895cad35a7684c40b66ff2825a9b3dbba9b4767d6570e136b7009a9a08b] <==
	I1119 22:35:13.028484       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1119 22:35:13.028771       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1119 22:35:13.028967       1 main.go:148] setting mtu 1500 for CNI 
	I1119 22:35:13.029050       1 main.go:178] kindnetd IP family: "ipv4"
	I1119 22:35:13.029083       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-19T22:35:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1119 22:35:13.231748       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1119 22:35:13.231779       1 controller.go:381] "Waiting for informer caches to sync"
	I1119 22:35:13.231789       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1119 22:35:13.231942       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1119 22:35:13.622300       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1119 22:35:13.622345       1 metrics.go:72] Registering metrics
	I1119 22:35:13.622477       1 controller.go:711] "Syncing nftables rules"
	I1119 22:35:23.231916       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:35:23.232001       1 main.go:301] handling current node
	I1119 22:35:33.234890       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:35:33.234945       1 main.go:301] handling current node
	I1119 22:35:43.232547       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:35:43.232593       1 main.go:301] handling current node
	I1119 22:35:53.235933       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:35:53.235985       1 main.go:301] handling current node
	I1119 22:36:03.238130       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1119 22:36:03.238165       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ad7b6880b1efc4a16d5cf0cecbb8a520d6cbc6b98ff585507d7a21dc7f0b8140] <==
	I1119 22:35:12.084136       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1119 22:35:12.083459       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1119 22:35:12.085889       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1119 22:35:12.086441       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1119 22:35:12.086458       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1119 22:35:12.094930       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1119 22:35:12.106340       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1119 22:35:12.107760       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1119 22:35:12.107825       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1119 22:35:12.110244       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1119 22:35:12.157890       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1119 22:35:12.164339       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1119 22:35:12.164456       1 policy_source.go:240] refreshing policies
	I1119 22:35:12.169883       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1119 22:35:12.426809       1 controller.go:667] quota admission added evaluator for: namespaces
	I1119 22:35:12.451640       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1119 22:35:12.466618       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1119 22:35:12.475347       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1119 22:35:12.482118       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1119 22:35:12.518664       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.173.1"}
	I1119 22:35:12.538370       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.202.101"}
	I1119 22:35:13.010419       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1119 22:35:15.454395       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1119 22:35:15.802089       1 controller.go:667] quota admission added evaluator for: endpoints
	I1119 22:35:16.003062       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [b8661f41149ca707f6324ddb0a00c89afc3d7e90a18f14246ef246fcdd15cae8] <==
	I1119 22:35:15.449024       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1119 22:35:15.449054       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1119 22:35:15.449061       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1119 22:35:15.449128       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1119 22:35:15.449136       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1119 22:35:15.449159       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1119 22:35:15.449201       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1119 22:35:15.449291       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1119 22:35:15.449294       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-409987"
	I1119 22:35:15.449341       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1119 22:35:15.450028       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1119 22:35:15.454910       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:35:15.465882       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1119 22:35:15.465917       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1119 22:35:15.471214       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1119 22:35:15.471232       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1119 22:35:15.471242       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1119 22:35:15.473270       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1119 22:35:15.477162       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1119 22:35:15.479393       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1119 22:35:15.481288       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1119 22:35:15.483479       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1119 22:35:15.485885       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1119 22:35:15.491035       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1119 22:35:15.510667       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [8c2ad58c26050333e61182f496a23c0232575fa2b6cc562669bfcfe38dac5cec] <==
	I1119 22:35:12.899332       1 server_linux.go:53] "Using iptables proxy"
	I1119 22:35:12.969795       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1119 22:35:13.070695       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1119 22:35:13.070740       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1119 22:35:13.070863       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1119 22:35:13.096162       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1119 22:35:13.096220       1 server_linux.go:132] "Using iptables Proxier"
	I1119 22:35:13.103025       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1119 22:35:13.103318       1 server.go:527] "Version info" version="v1.34.1"
	I1119 22:35:13.103346       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:35:13.104925       1 config.go:403] "Starting serviceCIDR config controller"
	I1119 22:35:13.104949       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1119 22:35:13.105049       1 config.go:200] "Starting service config controller"
	I1119 22:35:13.105062       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1119 22:35:13.105097       1 config.go:106] "Starting endpoint slice config controller"
	I1119 22:35:13.105103       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1119 22:35:13.105567       1 config.go:309] "Starting node config controller"
	I1119 22:35:13.105583       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1119 22:35:13.105590       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1119 22:35:13.206011       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1119 22:35:13.206037       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1119 22:35:13.206046       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9ea6b371425c23c51f93f2430382d9425eb4c20205a212ba69de8647057e8a75] <==
	I1119 22:35:10.347335       1 serving.go:386] Generated self-signed cert in-memory
	W1119 22:35:12.020575       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1119 22:35:12.020607       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1119 22:35:12.020633       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1119 22:35:12.020642       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1119 22:35:12.090355       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1119 22:35:12.090391       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1119 22:35:12.096357       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:35:12.096452       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1119 22:35:12.096807       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1119 22:35:12.096860       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1119 22:35:12.197467       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 19 22:35:14 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:14.719471     747 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 19 22:35:16 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:16.176326     747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzp2r\" (UniqueName: \"kubernetes.io/projected/731791a6-3fa2-4329-9563-847063f17875-kube-api-access-rzp2r\") pod \"kubernetes-dashboard-855c9754f9-dcs8c\" (UID: \"731791a6-3fa2-4329-9563-847063f17875\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dcs8c"
	Nov 19 22:35:16 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:16.176603     747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/731791a6-3fa2-4329-9563-847063f17875-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-dcs8c\" (UID: \"731791a6-3fa2-4329-9563-847063f17875\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dcs8c"
	Nov 19 22:35:16 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:16.176658     747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d6526810-a5d4-469f-b17a-387ea66cbf97-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-qmqmj\" (UID: \"d6526810-a5d4-469f-b17a-387ea66cbf97\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj"
	Nov 19 22:35:16 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:16.176690     747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4g9n\" (UniqueName: \"kubernetes.io/projected/d6526810-a5d4-469f-b17a-387ea66cbf97-kube-api-access-k4g9n\") pod \"dashboard-metrics-scraper-6ffb444bf9-qmqmj\" (UID: \"d6526810-a5d4-469f-b17a-387ea66cbf97\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj"
	Nov 19 22:35:22 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:22.583301     747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dcs8c" podStartSLOduration=1.36648315 podStartE2EDuration="6.583279334s" podCreationTimestamp="2025-11-19 22:35:16 +0000 UTC" firstStartedPulling="2025-11-19 22:35:16.416987721 +0000 UTC m=+7.011208822" lastFinishedPulling="2025-11-19 22:35:21.633783917 +0000 UTC m=+12.228005006" observedRunningTime="2025-11-19 22:35:22.58315081 +0000 UTC m=+13.177371917" watchObservedRunningTime="2025-11-19 22:35:22.583279334 +0000 UTC m=+13.177500443"
	Nov 19 22:35:24 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:24.575432     747 scope.go:117] "RemoveContainer" containerID="7a29b930eecd6e8b867196be1348911dd7bb4690a4046d0e21f44d38169a8938"
	Nov 19 22:35:25 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:25.579026     747 scope.go:117] "RemoveContainer" containerID="7a29b930eecd6e8b867196be1348911dd7bb4690a4046d0e21f44d38169a8938"
	Nov 19 22:35:25 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:25.579192     747 scope.go:117] "RemoveContainer" containerID="ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5"
	Nov 19 22:35:25 default-k8s-diff-port-409987 kubelet[747]: E1119 22:35:25.579403     747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qmqmj_kubernetes-dashboard(d6526810-a5d4-469f-b17a-387ea66cbf97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj" podUID="d6526810-a5d4-469f-b17a-387ea66cbf97"
	Nov 19 22:35:26 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:26.583117     747 scope.go:117] "RemoveContainer" containerID="ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5"
	Nov 19 22:35:26 default-k8s-diff-port-409987 kubelet[747]: E1119 22:35:26.583270     747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qmqmj_kubernetes-dashboard(d6526810-a5d4-469f-b17a-387ea66cbf97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj" podUID="d6526810-a5d4-469f-b17a-387ea66cbf97"
	Nov 19 22:35:28 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:28.341563     747 scope.go:117] "RemoveContainer" containerID="ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5"
	Nov 19 22:35:28 default-k8s-diff-port-409987 kubelet[747]: E1119 22:35:28.341798     747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qmqmj_kubernetes-dashboard(d6526810-a5d4-469f-b17a-387ea66cbf97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj" podUID="d6526810-a5d4-469f-b17a-387ea66cbf97"
	Nov 19 22:35:42 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:42.500162     747 scope.go:117] "RemoveContainer" containerID="ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5"
	Nov 19 22:35:42 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:42.628034     747 scope.go:117] "RemoveContainer" containerID="ed9f09fb0b5644c8c9988cc0a7421cb8117a3249899fef2a222fef012d2452e5"
	Nov 19 22:35:42 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:42.628286     747 scope.go:117] "RemoveContainer" containerID="83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88"
	Nov 19 22:35:42 default-k8s-diff-port-409987 kubelet[747]: E1119 22:35:42.628487     747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qmqmj_kubernetes-dashboard(d6526810-a5d4-469f-b17a-387ea66cbf97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj" podUID="d6526810-a5d4-469f-b17a-387ea66cbf97"
	Nov 19 22:35:43 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:43.632725     747 scope.go:117] "RemoveContainer" containerID="f59c54535c6e9f1fafd864cdfcb9f69068ddbcb8bcbe984010638eb57651a952"
	Nov 19 22:35:48 default-k8s-diff-port-409987 kubelet[747]: I1119 22:35:48.341987     747 scope.go:117] "RemoveContainer" containerID="83f7fbb5ecf35c56fb947b8da8966082d238d53948822b1fc5bd9d5798538f88"
	Nov 19 22:35:48 default-k8s-diff-port-409987 kubelet[747]: E1119 22:35:48.342255     747 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qmqmj_kubernetes-dashboard(d6526810-a5d4-469f-b17a-387ea66cbf97)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qmqmj" podUID="d6526810-a5d4-469f-b17a-387ea66cbf97"
	Nov 19 22:35:58 default-k8s-diff-port-409987 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 19 22:35:58 default-k8s-diff-port-409987 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 19 22:35:58 default-k8s-diff-port-409987 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 19 22:35:58 default-k8s-diff-port-409987 systemd[1]: kubelet.service: Consumed 1.581s CPU time.
	
	
	==> kubernetes-dashboard [e856544d8049aafde0d5307cb696e25dd798b39edfacee5f142c940e0173e7e9] <==
	2025/11/19 22:35:21 Using namespace: kubernetes-dashboard
	2025/11/19 22:35:21 Using in-cluster config to connect to apiserver
	2025/11/19 22:35:21 Using secret token for csrf signing
	2025/11/19 22:35:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/19 22:35:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/19 22:35:21 Successful initial request to the apiserver, version: v1.34.1
	2025/11/19 22:35:21 Generating JWE encryption key
	2025/11/19 22:35:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/19 22:35:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/19 22:35:21 Initializing JWE encryption key from synchronized object
	2025/11/19 22:35:21 Creating in-cluster Sidecar client
	2025/11/19 22:35:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:35:21 Serving insecurely on HTTP port: 9090
	2025/11/19 22:35:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/19 22:35:21 Starting overwatch
	
	
	==> storage-provisioner [507666fa3cc8548957588749adb1760d2f112c37d344b848ea532467dc639c4b] <==
	I1119 22:35:43.690522       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1119 22:35:43.700515       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1119 22:35:43.700587       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1119 22:35:43.702675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:47.158295       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:51.418833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:55.018399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:35:58.072339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:36:01.095546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:36:01.101722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:36:01.101882       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1119 22:36:01.101987       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"238e7196-679c-4ac3-8e69-1a8c292573ac", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-409987_fb94d7bf-9677-4765-85d9-af6755306adc became leader
	I1119 22:36:01.102078       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-409987_fb94d7bf-9677-4765-85d9-af6755306adc!
	W1119 22:36:01.104109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:36:01.107397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1119 22:36:01.203150       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-409987_fb94d7bf-9677-4765-85d9-af6755306adc!
	W1119 22:36:03.111612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:36:03.117970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:36:05.122026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1119 22:36:05.219234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f59c54535c6e9f1fafd864cdfcb9f69068ddbcb8bcbe984010638eb57651a952] <==
	I1119 22:35:12.866010       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1119 22:35:42.867674       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987: exit status 2 (334.849425ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-409987 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.51s)
E1119 22:37:23.144867   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
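Note on the post-mortem above: after the Pause failure the helper checks the API server field of `minikube status` and then lists every pod whose phase is not Running (`kubectl ... --field-selector=status.phase!=Running`). A minimal client-go sketch of that same check is shown below; it is illustrative only and assumes a kubeconfig at the default location, it is not part of helpers_test.go.

	// postmortem_pods.go: list pods that are not Running, mirroring the
	// field selector used by the post-mortem helper above. Illustrative
	// sketch; the kubeconfig path is an assumption.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same selector the helper uses: anything whose phase is not Running.
		pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}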

                                                
                                    

Test pass (263/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.49
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 4.13
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.37
21 TestBinaryMirror 0.8
22 TestOffline 86.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 123.23
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 7.4
48 TestAddons/StoppedEnableDisable 18.52
49 TestCertOptions 27.41
50 TestCertExpiration 222.56
52 TestForceSystemdFlag 32.01
53 TestForceSystemdEnv 34.64
58 TestErrorSpam/setup 20.25
59 TestErrorSpam/start 0.62
60 TestErrorSpam/status 0.89
61 TestErrorSpam/pause 6.44
62 TestErrorSpam/unpause 5.47
63 TestErrorSpam/stop 2.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 40.29
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.97
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.79
75 TestFunctional/serial/CacheCmd/cache/add_local 1.12
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 46
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.09
86 TestFunctional/serial/LogsFileCmd 1.12
87 TestFunctional/serial/InvalidService 3.83
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 8.28
91 TestFunctional/parallel/DryRun 0.36
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 0.89
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 24.47
101 TestFunctional/parallel/SSHCmd 0.56
102 TestFunctional/parallel/CpCmd 1.8
103 TestFunctional/parallel/MySQL 16.65
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 1.81
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
113 TestFunctional/parallel/License 0.47
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
117 TestFunctional/parallel/Version/short 0.08
118 TestFunctional/parallel/Version/components 0.55
120 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
121 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
122 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
123 TestFunctional/parallel/ImageCommands/ImageBuild 2.31
124 TestFunctional/parallel/ImageCommands/Setup 1.22
126 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.2
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
145 TestFunctional/parallel/ProfileCmd/profile_list 0.45
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
147 TestFunctional/parallel/MountCmd/any-port 5.42
148 TestFunctional/parallel/MountCmd/specific-port 1.59
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
150 TestFunctional/parallel/ServiceCmd/List 1.68
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.01
162 TestMultiControlPlane/serial/StartCluster 133.21
163 TestMultiControlPlane/serial/DeployApp 3.93
164 TestMultiControlPlane/serial/PingHostFromPods 0.97
165 TestMultiControlPlane/serial/AddWorkerNode 56.91
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.83
168 TestMultiControlPlane/serial/CopyFile 16.14
169 TestMultiControlPlane/serial/StopSecondaryNode 14.23
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.67
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.88
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.83
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 115.79
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.01
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
176 TestMultiControlPlane/serial/StopCluster 47.62
177 TestMultiControlPlane/serial/RestartCluster 53.97
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
179 TestMultiControlPlane/serial/AddSecondaryNode 75.28
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
185 TestJSONOutput/start/Command 66.46
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.91
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 28.64
211 TestKicCustomNetwork/use_default_bridge_network 22.55
212 TestKicExistingNetwork 23.89
213 TestKicCustomSubnet 22.48
214 TestKicStaticIP 26.92
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 46.42
219 TestMountStart/serial/StartWithMountFirst 4.97
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 4.67
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.64
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.23
226 TestMountStart/serial/RestartStopped 7.14
227 TestMountStart/serial/VerifyMountPostStop 0.25
230 TestMultiNode/serial/FreshStart2Nodes 92.77
231 TestMultiNode/serial/DeployApp2Nodes 3.09
232 TestMultiNode/serial/PingHostFrom2Pods 0.69
233 TestMultiNode/serial/AddNode 53.27
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.61
236 TestMultiNode/serial/CopyFile 9.23
237 TestMultiNode/serial/StopNode 2.19
238 TestMultiNode/serial/StartAfterStop 7.24
239 TestMultiNode/serial/RestartKeepsNodes 59.48
240 TestMultiNode/serial/DeleteNode 4.93
241 TestMultiNode/serial/StopMultiNode 30.73
242 TestMultiNode/serial/RestartMultiNode 46.85
243 TestMultiNode/serial/ValidateNameConflict 22.61
248 TestPreload 103.4
250 TestScheduledStopUnix 97
253 TestInsufficientStorage 9.27
254 TestRunningBinaryUpgrade 50.03
256 TestKubernetesUpgrade 305.21
257 TestMissingContainerUpgrade 73.59
259 TestPause/serial/Start 54.99
260 TestStoppedBinaryUpgrade/Setup 0.52
261 TestStoppedBinaryUpgrade/Upgrade 101.12
262 TestPause/serial/SecondStartNoReconfiguration 9.94
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/StartWithK8s 23.8
282 TestNetworkPlugins/group/false 3.77
286 TestNoKubernetes/serial/StartWithStopK8s 23.42
287 TestNoKubernetes/serial/Start 4.17
288 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
289 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
290 TestNoKubernetes/serial/ProfileList 4.08
291 TestNoKubernetes/serial/Stop 1.26
292 TestNoKubernetes/serial/StartNoArgs 6.45
293 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
295 TestStartStop/group/old-k8s-version/serial/FirstStart 53.24
297 TestStartStop/group/no-preload/serial/FirstStart 48.18
298 TestStartStop/group/old-k8s-version/serial/DeployApp 7.23
300 TestStartStop/group/old-k8s-version/serial/Stop 16.07
301 TestStartStop/group/no-preload/serial/DeployApp 7.24
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
303 TestStartStop/group/old-k8s-version/serial/SecondStart 44.68
305 TestStartStop/group/no-preload/serial/Stop 16.28
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/no-preload/serial/SecondStart 52.56
309 TestStartStop/group/embed-certs/serial/FirstStart 38.45
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.93
316 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
317 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
318 TestStartStop/group/embed-certs/serial/DeployApp 8.24
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
322 TestStartStop/group/embed-certs/serial/Stop 17.35
324 TestStartStop/group/newest-cni/serial/FirstStart 26.94
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
326 TestStartStop/group/embed-certs/serial/SecondStart 51.95
327 TestStartStop/group/newest-cni/serial/DeployApp 0
329 TestStartStop/group/newest-cni/serial/Stop 12.84
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
331 TestStartStop/group/newest-cni/serial/SecondStart 10.75
332 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.24
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.27
339 TestNetworkPlugins/group/auto/Start 38.82
340 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
342 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 44.41
343 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
344 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
346 TestNetworkPlugins/group/kindnet/Start 43.26
347 TestNetworkPlugins/group/auto/KubeletFlags 0.32
348 TestNetworkPlugins/group/auto/NetCatPod 9.21
349 TestNetworkPlugins/group/calico/Start 50.9
350 TestNetworkPlugins/group/auto/DNS 0.13
351 TestNetworkPlugins/group/auto/Localhost 0.12
352 TestNetworkPlugins/group/auto/HairPin 0.13
353 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
357 TestNetworkPlugins/group/custom-flannel/Start 51.93
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
360 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
361 TestNetworkPlugins/group/enable-default-cni/Start 64.77
362 TestNetworkPlugins/group/kindnet/DNS 0.13
363 TestNetworkPlugins/group/kindnet/Localhost 0.11
364 TestNetworkPlugins/group/kindnet/HairPin 0.11
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/KubeletFlags 0.39
367 TestNetworkPlugins/group/calico/NetCatPod 9.22
368 TestNetworkPlugins/group/flannel/Start 48.51
369 TestNetworkPlugins/group/calico/DNS 0.11
370 TestNetworkPlugins/group/calico/Localhost 0.09
371 TestNetworkPlugins/group/calico/HairPin 0.09
372 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
373 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.19
374 TestNetworkPlugins/group/bridge/Start 62.87
375 TestNetworkPlugins/group/custom-flannel/DNS 0.13
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.1
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.08
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.08
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
385 TestNetworkPlugins/group/flannel/NetCatPod 10.18
386 TestNetworkPlugins/group/flannel/DNS 0.1
387 TestNetworkPlugins/group/flannel/Localhost 0.08
388 TestNetworkPlugins/group/flannel/HairPin 0.08
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
390 TestNetworkPlugins/group/bridge/NetCatPod 9.16
391 TestNetworkPlugins/group/bridge/DNS 0.11
392 TestNetworkPlugins/group/bridge/Localhost 0.08
393 TestNetworkPlugins/group/bridge/HairPin 0.08
x
+
TestDownloadOnly/v1.28.0/json-events (5.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-797272 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-797272 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.486269931s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.49s)
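Note: the json-events subtest starts minikube with `-o=json`, which switches the console output to one JSON event per line (the TestJSONOutput DistinctCurrentSteps / IncreasingCurrentSteps checks elsewhere in this report consume that stream). A rough reader for such line-delimited JSON is sketched below; it decodes each line into a generic map and deliberately assumes nothing about the event schema beyond well-formed JSON objects.

	// read_json_events.go: read line-delimited JSON (for example, piped from
	// `minikube start -o=json`) and print each decoded event. Sketch only.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		scanner := bufio.NewScanner(os.Stdin)
		scanner.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long lines
		for scanner.Scan() {
			line := scanner.Bytes()
			if len(line) == 0 {
				continue
			}
			var event map[string]any
			if err := json.Unmarshal(line, &event); err != nil {
				fmt.Fprintf(os.Stderr, "skipping non-JSON line: %v\n", err)
				continue
			}
			fmt.Printf("%v\n", event)
		}
		if err := scanner.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}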

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1119 21:47:02.693479   12829 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1119 21:47:02.693564   12829 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-797272
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-797272: exit status 85 (70.382792ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-797272 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-797272 │ jenkins │ v1.37.0 │ 19 Nov 25 21:46 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:46:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:46:57.255507   12841 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:46:57.255777   12841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:46:57.255787   12841 out.go:374] Setting ErrFile to fd 2...
	I1119 21:46:57.255793   12841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:46:57.255985   12841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	W1119 21:46:57.256132   12841 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21918-9335/.minikube/config/config.json: open /home/jenkins/minikube-integration/21918-9335/.minikube/config/config.json: no such file or directory
	I1119 21:46:57.256636   12841 out.go:368] Setting JSON to true
	I1119 21:46:57.257482   12841 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1765,"bootTime":1763587052,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:46:57.257568   12841 start.go:143] virtualization: kvm guest
	I1119 21:46:57.259517   12841 out.go:99] [download-only-797272] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1119 21:46:57.259634   12841 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball: no such file or directory
	I1119 21:46:57.259694   12841 notify.go:221] Checking for updates...
	I1119 21:46:57.260757   12841 out.go:171] MINIKUBE_LOCATION=21918
	I1119 21:46:57.261967   12841 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:46:57.263140   12841 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 21:46:57.264252   12841 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 21:46:57.265339   12841 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1119 21:46:57.267276   12841 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 21:46:57.267530   12841 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:46:57.289147   12841 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 21:46:57.289238   12841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:46:57.706931   12841 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-19 21:46:57.697146003 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:46:57.707029   12841 docker.go:319] overlay module found
	I1119 21:46:57.708691   12841 out.go:99] Using the docker driver based on user configuration
	I1119 21:46:57.708725   12841 start.go:309] selected driver: docker
	I1119 21:46:57.708734   12841 start.go:930] validating driver "docker" against <nil>
	I1119 21:46:57.708852   12841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:46:57.763453   12841 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-19 21:46:57.754179315 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:46:57.763700   12841 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:46:57.764241   12841 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1119 21:46:57.764430   12841 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 21:46:57.765919   12841 out.go:171] Using Docker driver with root privileges
	I1119 21:46:57.766947   12841 cni.go:84] Creating CNI manager for ""
	I1119 21:46:57.767010   12841 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1119 21:46:57.767023   12841 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1119 21:46:57.767079   12841 start.go:353] cluster config:
	{Name:download-only-797272 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-797272 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:46:57.768217   12841 out.go:99] Starting "download-only-797272" primary control-plane node in "download-only-797272" cluster
	I1119 21:46:57.768231   12841 cache.go:134] Beginning downloading kic base image for docker with crio
	I1119 21:46:57.769181   12841 out.go:99] Pulling base image v0.0.48-1763561786-21918 ...
	I1119 21:46:57.769210   12841 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 21:46:57.769332   12841 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local docker daemon
	I1119 21:46:57.785503   12841 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:46:57.785666   12841 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 in local cache directory
	I1119 21:46:57.785774   12841 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 to local cache
	I1119 21:46:57.798760   12841 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1119 21:46:57.798779   12841 cache.go:65] Caching tarball of preloaded images
	I1119 21:46:57.798910   12841 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 21:46:57.800431   12841 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1119 21:46:57.800448   12841 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1119 21:46:57.826315   12841 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1119 21:46:57.826396   12841 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1119 21:47:01.164886   12841 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1119 21:47:01.165208   12841 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/download-only-797272/config.json ...
	I1119 21:47:01.165239   12841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/download-only-797272/config.json: {Name:mk70e3ca3e1435492e03fccfb42758bc474e9fc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1119 21:47:01.165402   12841 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1119 21:47:01.166034   12841 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21918-9335/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-797272 host does not exist
	  To start a cluster, run: "minikube start -p download-only-797272"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
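Note: the Last Start log above shows the preload download flow: the checksum is fetched from the GCS API ("72bc7f8573f574c02d8c9a9b3496176b") and the tarball URL is then downloaded with a `?checksum=md5:...` query that is verified locally. A rough Go sketch of that verify-after-download step is below, with the URL and checksum copied from the log; it is an illustration, not minikube's own download package.

	// verify_preload.go: download a file and check its md5 against an expected
	// value, the same kind of verification the preload download above performs.
	// URL and checksum are taken from the log; treat this as a sketch.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
		want := "72bc7f8573f574c02d8c9a9b3496176b" // checksum reported by the GCS API in the log

		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		out, err := os.Create("preload.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer out.Close()

		// Hash while writing to disk so the download is only read once.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			panic(err)
		}
		got := hex.EncodeToString(h.Sum(nil))
		if got != want {
			fmt.Printf("checksum mismatch: got %s, want %s\n", got, want)
			os.Exit(1)
		}
		fmt.Println("preload checksum verified")
	}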

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-797272
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (4.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-761775 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-761775 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.12701414s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1119 21:47:07.234247   12829 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1119 21:47:07.234282   12829 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-761775
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-761775: exit status 85 (65.584056ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-797272 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-797272 │ jenkins │ v1.37.0 │ 19 Nov 25 21:46 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ delete  │ -p download-only-797272                                                                                                                                                   │ download-only-797272 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │ 19 Nov 25 21:47 UTC │
	│ start   │ -o=json --download-only -p download-only-761775 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-761775 │ jenkins │ v1.37.0 │ 19 Nov 25 21:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/19 21:47:03
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1119 21:47:03.154965   13204 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:47:03.155171   13204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:03.155179   13204 out.go:374] Setting ErrFile to fd 2...
	I1119 21:47:03.155183   13204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:47:03.155372   13204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:47:03.155755   13204 out.go:368] Setting JSON to true
	I1119 21:47:03.156502   13204 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1771,"bootTime":1763587052,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:47:03.156587   13204 start.go:143] virtualization: kvm guest
	I1119 21:47:03.158166   13204 out.go:99] [download-only-761775] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 21:47:03.158329   13204 notify.go:221] Checking for updates...
	I1119 21:47:03.159531   13204 out.go:171] MINIKUBE_LOCATION=21918
	I1119 21:47:03.160660   13204 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:47:03.161760   13204 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 21:47:03.162835   13204 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 21:47:03.163945   13204 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1119 21:47:03.165867   13204 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1119 21:47:03.166069   13204 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:47:03.187433   13204 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 21:47:03.187500   13204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:03.241634   13204 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-19 21:47:03.232920149 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:47:03.241763   13204 docker.go:319] overlay module found
	I1119 21:47:03.243055   13204 out.go:99] Using the docker driver based on user configuration
	I1119 21:47:03.243083   13204 start.go:309] selected driver: docker
	I1119 21:47:03.243092   13204 start.go:930] validating driver "docker" against <nil>
	I1119 21:47:03.243163   13204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:47:03.300405   13204 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-19 21:47:03.29189149 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:47:03.300536   13204 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1119 21:47:03.301015   13204 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1119 21:47:03.301164   13204 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1119 21:47:03.302800   13204 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-761775 host does not exist
	  To start a cluster, run: "minikube start -p download-only-761775"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-761775
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.37s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-279060 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-279060" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-279060
--- PASS: TestDownloadOnlyKic (0.37s)

                                                
                                    
x
+
TestBinaryMirror (0.8s)

                                                
                                                
=== RUN   TestBinaryMirror
I1119 21:47:08.280058   12829 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-562747 --alsologtostderr --binary-mirror http://127.0.0.1:39249 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-562747" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-562747
--- PASS: TestBinaryMirror (0.80s)
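Note: TestBinaryMirror points `minikube start --binary-mirror` at a local HTTP endpoint (http://127.0.0.1:39249 in the run above) instead of dl.k8s.io. A minimal stand-in for such a mirror is just a static file server over a directory of pre-fetched binaries; the sketch below uses net/http with an assumed local directory layout and is not the helper the test itself starts.

	// binary_mirror.go: serve a directory of pre-downloaded Kubernetes binaries
	// over HTTP so it can be used as a --binary-mirror target. Directory layout
	// and port are assumptions modeled on the log above.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Expects files laid out like ./mirror/v1.34.1/bin/linux/amd64/kubectl
		fs := http.FileServer(http.Dir("./mirror"))
		log.Println("serving binary mirror on http://127.0.0.1:39249")
		log.Fatal(http.ListenAndServe("127.0.0.1:39249", fs))
	}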

                                                
                                    
x
+
TestOffline (86.59s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-328669 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-328669 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m24.130364054s)
helpers_test.go:175: Cleaning up "offline-crio-328669" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-328669
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-328669: (2.454876175s)
--- PASS: TestOffline (86.59s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-418049
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-418049: exit status 85 (61.433382ms)

                                                
                                                
-- stdout --
	* Profile "addons-418049" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-418049"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-418049
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-418049: exit status 85 (58.343688ms)

                                                
                                                
-- stdout --
	* Profile "addons-418049" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-418049"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
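Both PreSetup checks rely on the distinct exit code (85) that minikube returns when the named profile does not exist yet. A minimal Go sketch of the same check, assuming the binary path and profile name shown above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same command as addons_test.go:1000/1011, run against a profile
	// that has not been created yet.
	cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "addons-418049")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("exit code: 0")
	case errors.As(err, &exitErr):
		// The report above shows exit status 85 while the profile is absent.
		fmt.Println("exit code:", exitErr.ExitCode())
	default:
		fmt.Println("could not run minikube:", err)
	}
}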

                                                
                                    
TestAddons/Setup (123.23s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-418049 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-418049 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m3.230998321s)
--- PASS: TestAddons/Setup (123.23s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-418049 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-418049 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (7.4s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-418049 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-418049 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [84377201-011c-4184-9a7a-dbe9ff9a1d89] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [84377201-011c-4184-9a7a-dbe9ff9a1d89] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.002966548s
addons_test.go:694: (dbg) Run:  kubectl --context addons-418049 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-418049 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-418049 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.40s)
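The FakeCredentials test verifies that the gcp-auth webhook injected credentials into the busybox pod by reading environment variables inside it. A minimal sketch of that verification, reusing the context and pod name from the log (the pod must still be running):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// podEnv reads one environment variable from inside the busybox pod,
// mirroring the kubectl exec calls in the log above.
func podEnv(name string) string {
	out, err := exec.Command("kubectl", "--context", "addons-418049",
		"exec", "busybox", "--", "/bin/sh", "-c", "printenv "+name).Output()
	if err != nil {
		log.Fatalf("printenv %s failed: %v", name, err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	for _, v := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
		fmt.Printf("%s=%s\n", v, podEnv(v))
	}
}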

                                                
                                    
TestAddons/StoppedEnableDisable (18.52s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-418049
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-418049: (18.249895955s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-418049
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-418049
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-418049
--- PASS: TestAddons/StoppedEnableDisable (18.52s)

                                                
                                    
TestCertOptions (27.41s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-844532 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-844532 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.235137717s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-844532 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-844532 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-844532 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-844532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-844532
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-844532: (2.505914007s)
--- PASS: TestCertOptions (27.41s)
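TestCertOptions asserts that the extra --apiserver-ips and --apiserver-names values end up as SANs in the apiserver certificate, which the openssl call above prints. A minimal Go sketch of the same check, assuming the certificate has first been copied to a local apiserver.crt (for example via `minikube -p cert-options-844532 ssh "sudo cat /var/lib/minikube/certs/apiserver.crt" > apiserver.crt`):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"net"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // hypothetical local copy of the node's cert
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Values requested via --apiserver-ips / --apiserver-names in the start command above.
	wantIP := net.ParseIP("192.168.15.15")
	wantName := "www.google.com"

	foundIP, foundName := false, false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			foundIP = true
		}
	}
	for _, name := range cert.DNSNames {
		if name == wantName {
			foundName = true
		}
	}
	fmt.Printf("SAN IP %s present: %v, SAN name %q present: %v\n", wantIP, foundIP, wantName, foundName)
}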

                                                
                                    
TestCertExpiration (222.56s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-855818 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-855818 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (32.040443561s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-855818 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-855818 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.506234837s)
helpers_test.go:175: Cleaning up "cert-expiration-855818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-855818
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-855818: (3.014153892s)
--- PASS: TestCertExpiration (222.56s)

                                                
                                    
TestForceSystemdFlag (32.01s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-631541 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-631541 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.363315077s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-631541 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-631541" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-631541
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-631541: (2.369669384s)
--- PASS: TestForceSystemdFlag (32.01s)
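The --force-systemd run is verified by reading CRI-O's drop-in config from the node. A minimal sketch of that check; the exact key looked for (cgroup_manager = "systemd") is an assumption about the file's contents, and the profile only exists until the cleanup step above deletes it:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same file that docker_test.go:132 cats over SSH.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-631541",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		log.Fatal(err)
	}
	conf := string(out)
	fmt.Print(conf)
	if strings.Contains(conf, `cgroup_manager = "systemd"`) {
		fmt.Println("systemd cgroup manager is configured")
	} else {
		fmt.Println("systemd cgroup manager setting not found")
	}
}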

                                                
                                    
TestForceSystemdEnv (34.64s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-630141 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-630141 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.578617854s)
helpers_test.go:175: Cleaning up "force-systemd-env-630141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-630141
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-630141: (3.058148467s)
--- PASS: TestForceSystemdEnv (34.64s)

                                                
                                    
TestErrorSpam/setup (20.25s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-706490 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-706490 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-706490 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-706490 --driver=docker  --container-runtime=crio: (20.24499954s)
--- PASS: TestErrorSpam/setup (20.25s)

                                                
                                    
TestErrorSpam/start (0.62s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

                                                
                                    
TestErrorSpam/status (0.89s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 status
--- PASS: TestErrorSpam/status (0.89s)

                                                
                                    
TestErrorSpam/pause (6.44s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 pause: exit status 80 (2.318845981s)

                                                
                                                
-- stdout --
	* Pausing node nospam-706490 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 pause: exit status 80 (1.888370809s)

                                                
                                                
-- stdout --
	* Pausing node nospam-706490 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 pause: exit status 80 (2.235265425s)

                                                
                                                
-- stdout --
	* Pausing node nospam-706490 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:52:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.44s)
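Each pause attempt above fails at the same step: `sudo runc list -f json` exits 1 because /run/runc does not exist on the node. A minimal sketch that reproduces just that step from the host; the JSON fields decoded here are an assumption about runc's list output:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// runcContainer holds the two fields we care about from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "nospam-706490",
		"ssh", "sudo runc list -f json").Output()
	if err != nil {
		// Exit status 1 with "open /run/runc: no such file or directory"
		// matches the GUEST_PAUSE failure shown in the log.
		log.Fatalf("runc list failed: %v", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID, c.Status)
	}
}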

                                                
                                    
TestErrorSpam/unpause (5.47s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 unpause: exit status 80 (1.918597153s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-706490 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:53:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 unpause: exit status 80 (1.785377987s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-706490 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:53:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 unpause: exit status 80 (1.769567466s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-706490 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-19T21:53:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.47s)

                                                
                                    
TestErrorSpam/stop (2.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 stop: (2.325989759s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-706490 --log_dir /tmp/nospam-706490 stop
--- PASS: TestErrorSpam/stop (2.52s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21918-9335/.minikube/files/etc/test/nested/copy/12829/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (40.29s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-037096 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-037096 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (40.291312669s)
--- PASS: TestFunctional/serial/StartWithProxy (40.29s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.97s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1119 21:53:51.227211   12829 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-037096 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-037096 --alsologtostderr -v=8: (5.963136015s)
functional_test.go:678: soft start took 5.964771094s for "functional-037096" cluster.
I1119 21:53:57.191712   12829 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (5.97s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-037096 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.79s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-037096 /tmp/TestFunctionalserialCacheCmdcacheadd_local1927129956/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 cache add minikube-local-cache-test:functional-037096
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 cache delete minikube-local-cache-test:functional-037096
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-037096
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.12s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (263.005786ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
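The cache_reload flow above is: remove the cached image from the node, confirm `crictl inspecti` now fails, run `cache reload`, and confirm the image is back. A minimal Go sketch of the same sequence, using the binary path and profile name from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run invokes the minikube binary with the given arguments and echoes its output.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	fmt.Printf("$ minikube %v\n%s", args, out)
	return err
}

func main() {
	p := "functional-037096"
	if err := run("-p", p, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest"); err != nil {
		log.Fatal(err)
	}
	// Expected to fail: the image was just removed from the node.
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		log.Fatal("image still present after rmi")
	}
	if err := run("-p", p, "cache", "reload"); err != nil {
		log.Fatal(err)
	}
	// After the reload, the image should be back on the node.
	if err := run("-p", p, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatal("image missing after cache reload: ", err)
	}
	fmt.Println("cache reload verified")
}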

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 kubectl -- --context functional-037096 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-037096 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (46s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-037096 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1119 21:54:12.911991   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:12.918358   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:12.929740   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:12.951047   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:12.992362   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:13.073687   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:13.235150   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:13.556807   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:14.198805   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:15.480145   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:18.042989   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:23.164620   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:54:33.406627   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-037096 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.000888745s)
functional_test.go:776: restart took 46.00105116s for "functional-037096" cluster.
I1119 21:54:49.424332   12829 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (46.00s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-037096 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
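ComponentHealth reads the control-plane pods as JSON and checks that each is Running and Ready. A minimal sketch of that check against the same context; the structs decode only the fields used here, and the component label is assumed to be set as on kubeadm control planes:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// podList mirrors just enough of the kubectl JSON output to read phase and conditions.
type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-037096",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s: phase=%s ready=%s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
	}
}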

                                                
                                    
TestFunctional/serial/LogsCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-037096 logs: (1.091755965s)
--- PASS: TestFunctional/serial/LogsCmd (1.09s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 logs --file /tmp/TestFunctionalserialLogsFileCmd2366247390/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-037096 logs --file /tmp/TestFunctionalserialLogsFileCmd2366247390/001/logs.txt: (1.117776045s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.12s)

                                                
                                    
TestFunctional/serial/InvalidService (3.83s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-037096 apply -f testdata/invalidsvc.yaml
E1119 21:54:53.888001   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-037096
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-037096: exit status 115 (322.086139ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30812 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-037096 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.83s)
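The service command exits 115 (SVC_UNREACHABLE) because invalid-svc has no running pod behind it, even though a NodePort URL is assigned. A minimal sketch that makes this visible by counting the service's ready endpoint addresses while testdata/invalidsvc.yaml is still applied:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// endpoints decodes only the ready addresses of the Endpoints object.
type endpoints struct {
	Subsets []struct {
		Addresses []struct {
			IP string `json:"ip"`
		} `json:"addresses"`
	} `json:"subsets"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-037096",
		"get", "endpoints", "invalid-svc", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var ep endpoints
	if err := json.Unmarshal(out, &ep); err != nil {
		log.Fatal(err)
	}
	ready := 0
	for _, s := range ep.Subsets {
		ready += len(s.Addresses)
	}
	// Zero ready addresses corresponds to the SVC_UNREACHABLE exit above.
	fmt.Printf("invalid-svc has %d ready endpoint address(es)\n", ready)
}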

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 config get cpus: exit status 14 (85.176216ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 config get cpus: exit status 14 (80.009188ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-037096 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-037096 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 51922: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.28s)

                                                
                                    
TestFunctional/parallel/DryRun (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-037096 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-037096 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (155.405978ms)

                                                
                                                
-- stdout --
	* [functional-037096] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:55:17.087426   51519 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:55:17.087667   51519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:17.087676   51519 out.go:374] Setting ErrFile to fd 2...
	I1119 21:55:17.087679   51519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:17.087885   51519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:55:17.088250   51519 out.go:368] Setting JSON to false
	I1119 21:55:17.089112   51519 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2265,"bootTime":1763587052,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:55:17.089215   51519 start.go:143] virtualization: kvm guest
	I1119 21:55:17.090994   51519 out.go:179] * [functional-037096] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 21:55:17.092115   51519 notify.go:221] Checking for updates...
	I1119 21:55:17.092156   51519 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:55:17.093278   51519 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:55:17.094726   51519 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 21:55:17.095946   51519 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 21:55:17.097009   51519 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 21:55:17.098179   51519 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:55:17.099625   51519 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:55:17.100127   51519 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:55:17.122636   51519 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 21:55:17.122694   51519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:55:17.179918   51519 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-19 21:55:17.170748016 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:55:17.180017   51519 docker.go:319] overlay module found
	I1119 21:55:17.181609   51519 out.go:179] * Using the docker driver based on existing profile
	I1119 21:55:17.182579   51519 start.go:309] selected driver: docker
	I1119 21:55:17.182593   51519 start.go:930] validating driver "docker" against &{Name:functional-037096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-037096 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:55:17.182666   51519 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:55:17.184201   51519 out.go:203] 
	W1119 21:55:17.185264   51519 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1119 21:55:17.186243   51519 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-037096 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
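The RSRC_INSUFFICIENT_REQ_MEMORY exit above comes solely from the --memory 250MB flag; the follow-up dry run at functional_test.go:1006 passes because it inherits the profile's configured 4096MB. A minimal sketch of a dry run that clears the stated 1800MB floor explicitly (the 2048MB value is illustrative, not taken from the test):

    out/minikube-linux-amd64 start -p functional-037096 --dry-run --memory 2048MB \
      --alsologtostderr -v=1 --driver=docker --container-runtime=crio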

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-037096 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-037096 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (157.242424ms)

                                                
                                                
-- stdout --
	* [functional-037096] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 21:55:16.931632   51434 out.go:360] Setting OutFile to fd 1 ...
	I1119 21:55:16.932038   51434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:16.932059   51434 out.go:374] Setting ErrFile to fd 2...
	I1119 21:55:16.932174   51434 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 21:55:16.932542   51434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 21:55:16.933012   51434 out.go:368] Setting JSON to false
	I1119 21:55:16.933911   51434 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2265,"bootTime":1763587052,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 21:55:16.933989   51434 start.go:143] virtualization: kvm guest
	I1119 21:55:16.936118   51434 out.go:179] * [functional-037096] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1119 21:55:16.937327   51434 notify.go:221] Checking for updates...
	I1119 21:55:16.937346   51434 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 21:55:16.938566   51434 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 21:55:16.939647   51434 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 21:55:16.940788   51434 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 21:55:16.941892   51434 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 21:55:16.942920   51434 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 21:55:16.944438   51434 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 21:55:16.944903   51434 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 21:55:16.967755   51434 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 21:55:16.967827   51434 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 21:55:17.023710   51434 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-19 21:55:17.013666077 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 21:55:17.023806   51434 docker.go:319] overlay module found
	I1119 21:55:17.025620   51434 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1119 21:55:17.026801   51434 start.go:309] selected driver: docker
	I1119 21:55:17.026835   51434 start.go:930] validating driver "docker" against &{Name:functional-037096 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763561786-21918@sha256:7d857ffd31ff83715b29c3208933c3dc8deb87751fbabf3dc1f90cf1a3da6865 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-037096 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1119 21:55:17.026928   51434 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 21:55:17.028418   51434 out.go:203] 
	W1119 21:55:17.029433   51434 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1119 21:55:17.030430   51434 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)
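The -f flag here takes a Go template over minikube's status struct; the run at functional_test.go:875 exercises the Host, Kubelet, APIServer and Kubeconfig fields. The same queries restated outside the harness (the labels in the template are free text):

    out/minikube-linux-amd64 -p functional-037096 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-037096 status -o json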

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (24.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [77c5463d-0a80-42ef-a439-24afc095a477] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003694123s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-037096 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-037096 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-037096 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-037096 apply -f testdata/storage-provisioner/pod.yaml
I1119 21:55:03.659459   12829 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [cabb1486-61e2-4ab9-9289-dd90c8fd6aca] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [cabb1486-61e2-4ab9-9289-dd90c8fd6aca] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.002677507s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-037096 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-037096 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-037096 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [96e40260-eb9e-4840-a734-6395f7770092] Pending
helpers_test.go:352: "sp-pod" [96e40260-eb9e-4840-a734-6395f7770092] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [96e40260-eb9e-4840-a734-6395f7770092] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003545553s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-037096 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.47s)
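The sequence above is a plain persistence check: create the claim, mount it in a pod, write a marker file, recreate the pod, and confirm the file survived the pod's deletion. A condensed replay of the same steps (manifests are the test's own testdata files; the waits for sp-pod to reach Running are elided):

    kubectl --context functional-037096 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-037096 apply -f testdata/storage-provisioner/pod.yaml
    # once sp-pod is Running, write a marker file onto the mounted claim
    kubectl --context functional-037096 exec sp-pod -- touch /tmp/mount/foo
    # recreate the pod; the claim, and the file on it, must outlive the pod
    kubectl --context functional-037096 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-037096 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-037096 exec sp-pod -- ls /tmp/mount    # expects: foo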

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh -n functional-037096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 cp functional-037096:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4252348927/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh -n functional-037096 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh -n functional-037096 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.80s)
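For reference, the transfers above cover both directions of minikube cp plus a verification read over ssh. Restated (the local destination ./cp-test.txt below is a hypothetical path, not the test's tmp directory):

    out/minikube-linux-amd64 -p functional-037096 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
    out/minikube-linux-amd64 -p functional-037096 cp functional-037096:/home/docker/cp-test.txt ./cp-test.txt  # node -> host
    out/minikube-linux-amd64 -p functional-037096 ssh -n functional-037096 "sudo cat /home/docker/cp-test.txt"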

                                                
                                    
x
+
TestFunctional/parallel/MySQL (16.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-037096 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-bc8bj" [32ef886b-b3ce-4a13-9471-b6088409f274] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/11/19 21:55:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-5bb876957f-bc8bj" [32ef886b-b3ce-4a13-9471-b6088409f274] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.002914373s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-037096 exec mysql-5bb876957f-bc8bj -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-037096 exec mysql-5bb876957f-bc8bj -- mysql -ppassword -e "show databases;": exit status 1 (82.816533ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1119 21:55:37.079997   12829 retry.go:31] will retry after 1.284119126s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-037096 exec mysql-5bb876957f-bc8bj -- mysql -ppassword -e "show databases;"
E1119 21:56:56.771652   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:59:12.911627   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 21:59:40.613714   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:04:12.911582   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (16.65s)
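The ERROR 2002 above only means mysqld was still starting inside the pod when the first probe ran; the harness retries after ~1.3s and the second probe succeeds. Outside the harness, the same probe can be wrapped in a small retry loop (pod name taken from this run):

    for i in $(seq 1 10); do
      kubectl --context functional-037096 exec mysql-5bb876957f-bc8bj -- \
        mysql -ppassword -e "show databases;" && break
      sleep 2
    done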

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12829/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "sudo cat /etc/test/nested/copy/12829/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)
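The hosts file read above gets into the node via minikube's file sync: anything placed under the files/ tree of MINIKUBE_HOME is copied to the same path inside the node when the profile starts. A rough sketch of how such a file would be staged (the 12829 path segment and file content come from this run; the copy happens at start):

    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/12829"
    printf 'Test file for checking file sync process' > "$MINIKUBE_HOME/files/etc/test/nested/copy/12829/hosts"
    # after the next start of the profile, the file appears at the same path inside the node
    out/minikube-linux-amd64 -p functional-037096 ssh "sudo cat /etc/test/nested/copy/12829/hosts"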

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12829.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "sudo cat /etc/ssl/certs/12829.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12829.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "sudo cat /usr/share/ca-certificates/12829.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/128292.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "sudo cat /etc/ssl/certs/128292.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/128292.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "sudo cat /usr/share/ca-certificates/128292.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.81s)
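The 51391683.0 and 3ec20f2e.0 names checked above follow the OpenSSL subject-hash convention used for CA certificate directories. Assuming the openssl binary is available in the node image, the hash can be reproduced from the synced cert itself (the pairing of 12829.pem with 51391683.0 is inferred from the grouped checks, not stated by the test):

    out/minikube-linux-amd64 -p functional-037096 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/12829.pem"   # expected to print 51391683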

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-037096 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 ssh "sudo systemctl is-active docker": exit status 1 (290.227053ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 ssh "sudo systemctl is-active containerd": exit status 1 (287.090805ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
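The exit status 3 in both probes is systemctl's standard code for a unit that is not active, which is exactly what this test wants for the two non-selected runtimes. The selected runtime should report the opposite (crio being the systemd unit name for CRI-O in the node):

    out/minikube-linux-amd64 -p functional-037096 ssh "sudo systemctl is-active crio"   # expected: active, exit status 0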

                                                
                                    
x
+
TestFunctional/parallel/License (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-037096 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/my-image                      │ functional-037096  │ a04c60e8a0cbf │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-037096 image ls --format table --alsologtostderr:
I1119 21:55:32.248500   53756 out.go:360] Setting OutFile to fd 1 ...
I1119 21:55:32.248720   53756 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:32.248729   53756 out.go:374] Setting ErrFile to fd 2...
I1119 21:55:32.248733   53756 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:32.248893   53756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
I1119 21:55:32.249420   53756 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 21:55:32.249530   53756 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 21:55:32.249869   53756 cli_runner.go:164] Run: docker container inspect functional-037096 --format={{.State.Status}}
I1119 21:55:32.266964   53756 ssh_runner.go:195] Run: systemctl --version
I1119 21:55:32.267004   53756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037096
I1119 21:55:32.282974   53756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/functional-037096/id_rsa Username:docker}
I1119 21:55:32.371768   53756 ssh_runner.go:195] Run: sudo crictl images --output json
E1119 21:55:34.850089   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)
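As the stderr above shows, image ls is ultimately served by crictl inside the node, so the same inventory can be read directly over ssh:

    out/minikube-linux-amd64 -p functional-037096 ssh "sudo crictl images --output json"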

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-037096 image ls --format json --alsologtostderr:
[{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":
"43824855"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f
9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io
/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0
530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"18924742e91de717ac6b9e462112b0b3089c9bd2bf1918b9e70bb4f40f88f242","repoDigests":["docker.io/library/d50bcedd879712235ea9746fc062b0f6a7f398109a5da48d1db50844065b55ca-tmp@sha256:e3941d85c3ee405a0f5c6275415f624a7e25b2aafe8ef0f6226c90d5a18bcb92"],"repoTags":[],"size":"1466132"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0
fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io
/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"a04c60e8a0cbf9efeffb59dfceef26d89ca1c9eddf9b7dd173b922061e502f05","repoDigests":["localhost/my-image@sha256:24e70b11c0c893a72a44d75b998862765320165fd5677e163050ecda6fa79ddb"],"repoTags":["localhost/my-image:functional-037096"],"size":"1468744"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-037096 image ls --format json --alsologtostderr:
I1119 21:55:32.041134   53701 out.go:360] Setting OutFile to fd 1 ...
I1119 21:55:32.041385   53701 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:32.041396   53701 out.go:374] Setting ErrFile to fd 2...
I1119 21:55:32.041402   53701 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:32.041622   53701 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
I1119 21:55:32.042157   53701 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 21:55:32.042279   53701 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 21:55:32.042670   53701 cli_runner.go:164] Run: docker container inspect functional-037096 --format={{.State.Status}}
I1119 21:55:32.060261   53701 ssh_runner.go:195] Run: systemctl --version
I1119 21:55:32.060296   53701 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037096
I1119 21:55:32.076576   53701 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/functional-037096/id_rsa Username:docker}
I1119 21:55:32.165602   53701 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-037096 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: a04c60e8a0cbf9efeffb59dfceef26d89ca1c9eddf9b7dd173b922061e502f05
repoDigests:
- localhost/my-image@sha256:24e70b11c0c893a72a44d75b998862765320165fd5677e163050ecda6fa79ddb
repoTags:
- localhost/my-image:functional-037096
size: "1468744"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 18924742e91de717ac6b9e462112b0b3089c9bd2bf1918b9e70bb4f40f88f242
repoDigests:
- docker.io/library/d50bcedd879712235ea9746fc062b0f6a7f398109a5da48d1db50844065b55ca-tmp@sha256:e3941d85c3ee405a0f5c6275415f624a7e25b2aafe8ef0f6226c90d5a18bcb92
repoTags: []
size: "1466132"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-037096 image ls --format yaml --alsologtostderr:
I1119 21:55:31.831283   53644 out.go:360] Setting OutFile to fd 1 ...
I1119 21:55:31.831518   53644 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:31.831527   53644 out.go:374] Setting ErrFile to fd 2...
I1119 21:55:31.831531   53644 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:31.831698   53644 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
I1119 21:55:31.832212   53644 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 21:55:31.832299   53644 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 21:55:31.832660   53644 cli_runner.go:164] Run: docker container inspect functional-037096 --format={{.State.Status}}
I1119 21:55:31.849956   53644 ssh_runner.go:195] Run: systemctl --version
I1119 21:55:31.849993   53644 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037096
I1119 21:55:31.866793   53644 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/functional-037096/id_rsa Username:docker}
I1119 21:55:31.955530   53644 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 ssh pgrep buildkitd: exit status 1 (317.96743ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image build -t localhost/my-image:functional-037096 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-037096 image build -t localhost/my-image:functional-037096 testdata/build --alsologtostderr: (1.777400053s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-037096 image build -t localhost/my-image:functional-037096 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 18924742e91
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-037096
--> a04c60e8a0c
Successfully tagged localhost/my-image:functional-037096
a04c60e8a0cbf9efeffb59dfceef26d89ca1c9eddf9b7dd173b922061e502f05
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-037096 image build -t localhost/my-image:functional-037096 testdata/build --alsologtostderr:
I1119 21:55:29.848136   53051 out.go:360] Setting OutFile to fd 1 ...
I1119 21:55:29.848384   53051 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:29.848393   53051 out.go:374] Setting ErrFile to fd 2...
I1119 21:55:29.848397   53051 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1119 21:55:29.848565   53051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
I1119 21:55:29.849109   53051 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 21:55:29.849689   53051 config.go:182] Loaded profile config "functional-037096": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1119 21:55:29.850097   53051 cli_runner.go:164] Run: docker container inspect functional-037096 --format={{.State.Status}}
I1119 21:55:29.867468   53051 ssh_runner.go:195] Run: systemctl --version
I1119 21:55:29.867524   53051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-037096
I1119 21:55:29.884417   53051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/functional-037096/id_rsa Username:docker}
I1119 21:55:29.975009   53051 build_images.go:162] Building image from path: /tmp/build.2728142934.tar
I1119 21:55:29.975103   53051 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1119 21:55:29.983136   53051 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2728142934.tar
I1119 21:55:29.986680   53051 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2728142934.tar: stat -c "%s %y" /var/lib/minikube/build/build.2728142934.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2728142934.tar': No such file or directory
I1119 21:55:29.986704   53051 ssh_runner.go:362] scp /tmp/build.2728142934.tar --> /var/lib/minikube/build/build.2728142934.tar (3072 bytes)
I1119 21:55:30.004003   53051 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2728142934
I1119 21:55:30.011392   53051 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2728142934 -xf /var/lib/minikube/build/build.2728142934.tar
I1119 21:55:30.020317   53051 crio.go:315] Building image: /var/lib/minikube/build/build.2728142934
I1119 21:55:30.020369   53051 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-037096 /var/lib/minikube/build/build.2728142934 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1119 21:55:31.546477   53051 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-037096 /var/lib/minikube/build/build.2728142934 --cgroup-manager=cgroupfs: (1.526080411s)
I1119 21:55:31.546577   53051 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2728142934
I1119 21:55:31.554316   53051 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2728142934.tar
I1119 21:55:31.561369   53051 build_images.go:218] Built localhost/my-image:functional-037096 from /tmp/build.2728142934.tar
I1119 21:55:31.561398   53051 build_images.go:134] succeeded building to: functional-037096
I1119 21:55:31.561404   53051 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.31s)
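For context, the STEP lines above come from podman executing the Dockerfile shipped under testdata/build inside the crio node. A minimal sketch of reproducing the build by hand, assuming a hypothetical scratch directory that mirrors those three steps (the real content.txt payload is not shown in this log):

    # Recreate a build context equivalent to the logged steps (hypothetical reconstruction).
    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo 'placeholder' > content.txt   # stand-in; the original file contents are not in the log
    # Build inside the node via minikube, then confirm the image is listed.
    out/minikube-linux-amd64 -p functional-037096 image build -t localhost/my-image:functional-037096 . --alsologtostderr
    out/minikube-linux-amd64 -p functional-037096 image ls | grep my-image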

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.191302971s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-037096
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.22s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-037096 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-037096 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-037096 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-037096 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 46640: os: process already finished
helpers_test.go:525: unable to kill pid 46352: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-037096 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-037096 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [25a4c9c3-6850-43ef-9a05-47138d694205] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [25a4c9c3-6850-43ef-9a05-47138d694205] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003723652s
I1119 21:55:05.898377   12829 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.20s)
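The setup step applies testdata/testsvc.yaml and then polls until a pod labelled run=nginx-svc reports Running. A rough equivalent using plain kubectl against the same context and label (the manifest itself is not reproduced here, and kubectl wait is a substitute for the test's own polling helper):

    kubectl --context functional-037096 apply -f testdata/testsvc.yaml
    # Wait up to 4 minutes for the nginx-svc pod to become Ready, mirroring the test's 4m0s budget.
    kubectl --context functional-037096 wait pod -l run=nginx-svc --for=condition=Ready --timeout=4m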

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image rm kicbase/echo-server:functional-037096 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-037096 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.189.104 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
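The tunnel checks above boil down to: start minikube tunnel, read the LoadBalancer ingress IP that gets assigned to nginx-svc, and hit it over HTTP. A hedged sketch of the same sequence (10.104.189.104 is simply the address allocated in this particular run):

    # Keep the tunnel running in the background so LoadBalancer services receive an ingress IP.
    out/minikube-linux-amd64 -p functional-037096 tunnel --alsologtostderr &
    # Read the ingress IP once it is populated, then verify the service answers.
    IP=$(kubectl --context functional-037096 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -sf "http://${IP}" >/dev/null && echo "tunnel is working"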

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-037096 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "385.558025ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "66.446051ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "328.768571ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "55.213745ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (5.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-037096 /tmp/TestFunctionalparallelMountCmdany-port2504038562/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763589307376732147" to /tmp/TestFunctionalparallelMountCmdany-port2504038562/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763589307376732147" to /tmp/TestFunctionalparallelMountCmdany-port2504038562/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763589307376732147" to /tmp/TestFunctionalparallelMountCmdany-port2504038562/001/test-1763589307376732147
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.576878ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1119 21:55:07.653610   12829 retry.go:31] will retry after 323.463491ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 19 21:55 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 19 21:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 19 21:55 test-1763589307376732147
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh cat /mount-9p/test-1763589307376732147
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-037096 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [6976d7c8-04d8-4e95-8d28-b37b3024085c] Pending
helpers_test.go:352: "busybox-mount" [6976d7c8-04d8-4e95-8d28-b37b3024085c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [6976d7c8-04d8-4e95-8d28-b37b3024085c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [6976d7c8-04d8-4e95-8d28-b37b3024085c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.00247321s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-037096 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-037096 /tmp/TestFunctionalparallelMountCmdany-port2504038562/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.42s)
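The 9p mount test follows a simple pattern: start minikube mount, confirm the guest sees a 9p filesystem at the mount point, inspect the files, and unmount. A minimal sketch under the same profile, using a hypothetical host directory and letting minikube pick the port, as the test does:

    # Expose a host directory inside the node over 9p (host path is illustrative).
    out/minikube-linux-amd64 mount -p functional-037096 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
    # Verify the mount from inside the node; retry briefly, since the mount comes up asynchronously.
    out/minikube-linux-amd64 -p functional-037096 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-037096 ssh -- ls -la /mount-9p
    # Tear down when finished.
    out/minikube-linux-amd64 -p functional-037096 ssh "sudo umount -f /mount-9p"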

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-037096 /tmp/TestFunctionalparallelMountCmdspecific-port1061123341/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (289.877583ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1119 21:55:13.090355   12829 retry.go:31] will retry after 300.000819ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh -- ls -la /mount-9p
I1119 21:55:13.691005   12829 detect.go:223] nested VM detected
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-037096 /tmp/TestFunctionalparallelMountCmdspecific-port1061123341/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 ssh "sudo umount -f /mount-9p": exit status 1 (257.565068ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-037096 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-037096 /tmp/TestFunctionalparallelMountCmdspecific-port1061123341/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.59s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-037096 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3833065531/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-037096 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3833065531/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-037096 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3833065531/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-037096 ssh "findmnt -T" /mount1: exit status 1 (343.513087ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1119 21:55:14.729756   12829 retry.go:31] will retry after 438.434105ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-037096 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-037096 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3833065531/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-037096 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3833065531/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-037096 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3833065531/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)
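VerifyCleanup relies on minikube mount --kill=true, which terminates every mount helper for the profile in one shot; the later per-mount stop attempts then find nothing left to kill, hence the "process does not exist" notes above. Roughly, with a hypothetical host directory in place of the test's temp path:

    # Start several mounts of the same host directory, then kill them all at once.
    out/minikube-linux-amd64 mount -p functional-037096 /tmp/demo:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-037096 /tmp/demo:/mount2 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-037096 /tmp/demo:/mount3 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-037096 --kill=true   # stops all mount helpers for the profile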

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-037096 service list: (1.679496305s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.68s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-037096 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-037096 service list -o json: (1.682518645s)
functional_test.go:1504: Took "1.682624273s" to run "out/minikube-linux-amd64 -p functional-037096 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-037096
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-037096
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-037096
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (133.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-194577 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m12.532922343s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (133.21s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (3.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-194577 kubectl -- rollout status deployment/busybox: (2.043444396s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-9jlvn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-s86pc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-scvdw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-9jlvn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-s86pc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-scvdw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-9jlvn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-s86pc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-scvdw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (0.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-9jlvn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-9jlvn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-s86pc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-s86pc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-scvdw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 kubectl -- exec busybox-7b57f96db7-scvdw -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.97s)
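Each pod resolves host.minikube.internal with nslookup, extracts the address from the fifth line of output, and pings it (192.168.49.1, the docker network gateway, in this run). A condensed one-pod version of the same check, reusing a pod name from the list above:

    POD=busybox-7b57f96db7-9jlvn   # any of the busybox replicas listed above works
    HOST_IP=$(out/minikube-linux-amd64 -p ha-194577 kubectl -- exec "$POD" -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 -p ha-194577 kubectl -- exec "$POD" -- sh -c "ping -c 1 $HOST_IP"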

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (56.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-194577 node add --alsologtostderr -v 5: (56.089593328s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.91s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-194577 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (16.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp testdata/cp-test.txt ha-194577:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1346796854/001/cp-test_ha-194577.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577:/home/docker/cp-test.txt ha-194577-m02:/home/docker/cp-test_ha-194577_ha-194577-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m02 "sudo cat /home/docker/cp-test_ha-194577_ha-194577-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577:/home/docker/cp-test.txt ha-194577-m03:/home/docker/cp-test_ha-194577_ha-194577-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m03 "sudo cat /home/docker/cp-test_ha-194577_ha-194577-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577:/home/docker/cp-test.txt ha-194577-m04:/home/docker/cp-test_ha-194577_ha-194577-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m04 "sudo cat /home/docker/cp-test_ha-194577_ha-194577-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp testdata/cp-test.txt ha-194577-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1346796854/001/cp-test_ha-194577-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577-m02:/home/docker/cp-test.txt ha-194577:/home/docker/cp-test_ha-194577-m02_ha-194577.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577 "sudo cat /home/docker/cp-test_ha-194577-m02_ha-194577.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577-m02:/home/docker/cp-test.txt ha-194577-m03:/home/docker/cp-test_ha-194577-m02_ha-194577-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m03 "sudo cat /home/docker/cp-test_ha-194577-m02_ha-194577-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577-m02:/home/docker/cp-test.txt ha-194577-m04:/home/docker/cp-test_ha-194577-m02_ha-194577-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m04 "sudo cat /home/docker/cp-test_ha-194577-m02_ha-194577-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp testdata/cp-test.txt ha-194577-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1346796854/001/cp-test_ha-194577-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577-m03:/home/docker/cp-test.txt ha-194577:/home/docker/cp-test_ha-194577-m03_ha-194577.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577 "sudo cat /home/docker/cp-test_ha-194577-m03_ha-194577.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577-m03:/home/docker/cp-test.txt ha-194577-m02:/home/docker/cp-test_ha-194577-m03_ha-194577-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m02 "sudo cat /home/docker/cp-test_ha-194577-m03_ha-194577-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577-m03:/home/docker/cp-test.txt ha-194577-m04:/home/docker/cp-test_ha-194577-m03_ha-194577-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m04 "sudo cat /home/docker/cp-test_ha-194577-m03_ha-194577-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp testdata/cp-test.txt ha-194577-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1346796854/001/cp-test_ha-194577-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577-m04:/home/docker/cp-test.txt ha-194577:/home/docker/cp-test_ha-194577-m04_ha-194577.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577 "sudo cat /home/docker/cp-test_ha-194577-m04_ha-194577.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577-m04:/home/docker/cp-test.txt ha-194577-m02:/home/docker/cp-test_ha-194577-m04_ha-194577-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m02 "sudo cat /home/docker/cp-test_ha-194577-m04_ha-194577-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 cp ha-194577-m04:/home/docker/cp-test.txt ha-194577-m03:/home/docker/cp-test_ha-194577-m04_ha-194577-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m03 "sudo cat /home/docker/cp-test_ha-194577-m04_ha-194577-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.14s)
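The CopyFile matrix exercises minikube cp in three directions (host to node, node to host, node to node) and confirms each transfer with ssh plus sudo cat. One representative round-trip, condensed from the commands above:

    # host -> control-plane node, then read it back from inside the node
    out/minikube-linux-amd64 -p ha-194577 cp testdata/cp-test.txt ha-194577:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577 "sudo cat /home/docker/cp-test.txt"
    # node -> node copy, verified on the destination node
    out/minikube-linux-amd64 -p ha-194577 cp ha-194577:/home/docker/cp-test.txt ha-194577-m02:/home/docker/cp-test_ha-194577_ha-194577-m02.txt
    out/minikube-linux-amd64 -p ha-194577 ssh -n ha-194577-m02 "sudo cat /home/docker/cp-test_ha-194577_ha-194577-m02.txt"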

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (14.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-194577 node stop m02 --alsologtostderr -v 5: (13.578705197s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-194577 status --alsologtostderr -v 5: exit status 7 (650.63005ms)

                                                
                                                
-- stdout --
	ha-194577
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-194577-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-194577-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-194577-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:08:58.050193   77942 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:08:58.050551   77942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:08:58.050563   77942 out.go:374] Setting ErrFile to fd 2...
	I1119 22:08:58.050567   77942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:08:58.050801   77942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:08:58.050985   77942 out.go:368] Setting JSON to false
	I1119 22:08:58.051015   77942 mustload.go:66] Loading cluster: ha-194577
	I1119 22:08:58.051104   77942 notify.go:221] Checking for updates...
	I1119 22:08:58.051383   77942 config.go:182] Loaded profile config "ha-194577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:08:58.051397   77942 status.go:174] checking status of ha-194577 ...
	I1119 22:08:58.051948   77942 cli_runner.go:164] Run: docker container inspect ha-194577 --format={{.State.Status}}
	I1119 22:08:58.069838   77942 status.go:371] ha-194577 host status = "Running" (err=<nil>)
	I1119 22:08:58.069861   77942 host.go:66] Checking if "ha-194577" exists ...
	I1119 22:08:58.070078   77942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-194577
	I1119 22:08:58.088379   77942 host.go:66] Checking if "ha-194577" exists ...
	I1119 22:08:58.088630   77942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:08:58.088684   77942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-194577
	I1119 22:08:58.106227   77942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/ha-194577/id_rsa Username:docker}
	I1119 22:08:58.194521   77942 ssh_runner.go:195] Run: systemctl --version
	I1119 22:08:58.200489   77942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:08:58.212007   77942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:08:58.266161   77942 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-19 22:08:58.256841872 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:08:58.266681   77942 kubeconfig.go:125] found "ha-194577" server: "https://192.168.49.254:8443"
	I1119 22:08:58.266706   77942 api_server.go:166] Checking apiserver status ...
	I1119 22:08:58.266740   77942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:08:58.278261   77942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup
	W1119 22:08:58.286378   77942 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:08:58.286425   77942 ssh_runner.go:195] Run: ls
	I1119 22:08:58.289865   77942 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 22:08:58.293738   77942 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 22:08:58.293758   77942 status.go:463] ha-194577 apiserver status = Running (err=<nil>)
	I1119 22:08:58.293766   77942 status.go:176] ha-194577 status: &{Name:ha-194577 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:08:58.293784   77942 status.go:174] checking status of ha-194577-m02 ...
	I1119 22:08:58.294043   77942 cli_runner.go:164] Run: docker container inspect ha-194577-m02 --format={{.State.Status}}
	I1119 22:08:58.310697   77942 status.go:371] ha-194577-m02 host status = "Stopped" (err=<nil>)
	I1119 22:08:58.310713   77942 status.go:384] host is not running, skipping remaining checks
	I1119 22:08:58.310718   77942 status.go:176] ha-194577-m02 status: &{Name:ha-194577-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:08:58.310743   77942 status.go:174] checking status of ha-194577-m03 ...
	I1119 22:08:58.311033   77942 cli_runner.go:164] Run: docker container inspect ha-194577-m03 --format={{.State.Status}}
	I1119 22:08:58.329546   77942 status.go:371] ha-194577-m03 host status = "Running" (err=<nil>)
	I1119 22:08:58.329565   77942 host.go:66] Checking if "ha-194577-m03" exists ...
	I1119 22:08:58.329867   77942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-194577-m03
	I1119 22:08:58.345735   77942 host.go:66] Checking if "ha-194577-m03" exists ...
	I1119 22:08:58.346009   77942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:08:58.346047   77942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-194577-m03
	I1119 22:08:58.362500   77942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/ha-194577-m03/id_rsa Username:docker}
	I1119 22:08:58.450635   77942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:08:58.462884   77942 kubeconfig.go:125] found "ha-194577" server: "https://192.168.49.254:8443"
	I1119 22:08:58.462907   77942 api_server.go:166] Checking apiserver status ...
	I1119 22:08:58.462935   77942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:08:58.472919   77942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W1119 22:08:58.480698   77942 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:08:58.480729   77942 ssh_runner.go:195] Run: ls
	I1119 22:08:58.484157   77942 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1119 22:08:58.488147   77942 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1119 22:08:58.488168   77942 status.go:463] ha-194577-m03 apiserver status = Running (err=<nil>)
	I1119 22:08:58.488179   77942 status.go:176] ha-194577-m03 status: &{Name:ha-194577-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:08:58.488202   77942 status.go:174] checking status of ha-194577-m04 ...
	I1119 22:08:58.488447   77942 cli_runner.go:164] Run: docker container inspect ha-194577-m04 --format={{.State.Status}}
	I1119 22:08:58.505528   77942 status.go:371] ha-194577-m04 host status = "Running" (err=<nil>)
	I1119 22:08:58.505544   77942 host.go:66] Checking if "ha-194577-m04" exists ...
	I1119 22:08:58.505777   77942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-194577-m04
	I1119 22:08:58.522533   77942 host.go:66] Checking if "ha-194577-m04" exists ...
	I1119 22:08:58.522775   77942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:08:58.522808   77942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-194577-m04
	I1119 22:08:58.540532   77942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/ha-194577-m04/id_rsa Username:docker}
	I1119 22:08:58.628198   77942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:08:58.640093   77942 status.go:176] ha-194577-m04 status: &{Name:ha-194577-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.23s)
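Note that in this run minikube status exits with code 7 once a node is stopped, so the test treats the non-zero exit plus the per-node breakdown above as the expected outcome rather than a failure. A short sketch of the same check:

    out/minikube-linux-amd64 -p ha-194577 node stop m02 --alsologtostderr -v 5
    out/minikube-linux-amd64 -p ha-194577 status --alsologtostderr -v 5
    echo "status exit code: $?"   # 7 observed here while ha-194577-m02 is stopped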

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (14.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 node start m02 --alsologtostderr -v 5
E1119 22:09:12.911568   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-194577 node start m02 --alsologtostderr -v 5: (14.007356508s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (115.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 stop --alsologtostderr -v 5
E1119 22:09:57.328318   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:09:57.334672   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:09:57.346018   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:09:57.367317   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:09:57.408635   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:09:57.489969   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:09:57.651420   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:09:57.973036   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:09:58.615016   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:09:59.896563   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:10:02.458417   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:10:07.580375   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-194577 stop --alsologtostderr -v 5: (56.816312408s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 start --wait true --alsologtostderr -v 5
E1119 22:10:17.822488   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:10:35.975403   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:10:38.304504   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-194577 start --wait true --alsologtostderr -v 5: (58.850256962s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (115.79s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 node delete m03 --alsologtostderr -v 5
E1119 22:11:19.266086   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-194577 node delete m03 --alsologtostderr -v 5: (9.20042615s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.01s)
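Note: the readiness check above uses a go-template that walks every node's status.conditions and prints the status of each Ready condition. Re-quoted for an interactive shell (the test passes the template as a single exec argument), and assuming kubectl's current context still points at the ha-194577 cluster:

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'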

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (47.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-194577 stop --alsologtostderr -v 5: (47.512486159s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-194577 status --alsologtostderr -v 5: exit status 7 (110.391042ms)

                                                
                                                
-- stdout --
	ha-194577
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-194577-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-194577-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:12:09.040341   92356 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:12:09.040616   92356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:12:09.040626   92356 out.go:374] Setting ErrFile to fd 2...
	I1119 22:12:09.040631   92356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:12:09.040829   92356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:12:09.040992   92356 out.go:368] Setting JSON to false
	I1119 22:12:09.041018   92356 mustload.go:66] Loading cluster: ha-194577
	I1119 22:12:09.041112   92356 notify.go:221] Checking for updates...
	I1119 22:12:09.041369   92356 config.go:182] Loaded profile config "ha-194577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:12:09.041383   92356 status.go:174] checking status of ha-194577 ...
	I1119 22:12:09.041762   92356 cli_runner.go:164] Run: docker container inspect ha-194577 --format={{.State.Status}}
	I1119 22:12:09.060828   92356 status.go:371] ha-194577 host status = "Stopped" (err=<nil>)
	I1119 22:12:09.060850   92356 status.go:384] host is not running, skipping remaining checks
	I1119 22:12:09.060857   92356 status.go:176] ha-194577 status: &{Name:ha-194577 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:12:09.060899   92356 status.go:174] checking status of ha-194577-m02 ...
	I1119 22:12:09.061244   92356 cli_runner.go:164] Run: docker container inspect ha-194577-m02 --format={{.State.Status}}
	I1119 22:12:09.078342   92356 status.go:371] ha-194577-m02 host status = "Stopped" (err=<nil>)
	I1119 22:12:09.078358   92356 status.go:384] host is not running, skipping remaining checks
	I1119 22:12:09.078363   92356 status.go:176] ha-194577-m02 status: &{Name:ha-194577-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:12:09.078376   92356 status.go:174] checking status of ha-194577-m04 ...
	I1119 22:12:09.078592   92356 cli_runner.go:164] Run: docker container inspect ha-194577-m04 --format={{.State.Status}}
	I1119 22:12:09.095098   92356 status.go:371] ha-194577-m04 host status = "Stopped" (err=<nil>)
	I1119 22:12:09.095116   92356 status.go:384] host is not running, skipping remaining checks
	I1119 22:12:09.095121   92356 status.go:176] ha-194577-m04 status: &{Name:ha-194577-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (47.62s)
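Note: with all three remaining nodes stopped, status prints Host/Kubelet/APIServer as Stopped and exits non-zero (exit status 7 in this run). As a small sketch outside the test itself, the same condition can be confirmed from a shell by inspecting the exit code:

  out/minikube-linux-amd64 -p ha-194577 status --alsologtostderr -v 5
  echo "status exit code: $?"   # 7 in the run above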

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (53.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1119 22:12:41.188297   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-194577 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (53.215031728s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.97s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 node add --control-plane --alsologtostderr -v 5
E1119 22:14:12.911675   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-194577 node add --control-plane --alsologtostderr -v 5: (1m14.465139319s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-194577 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.28s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

                                                
                                    
TestJSONOutput/start/Command (66.46s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-538999 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1119 22:14:57.327686   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:15:25.029923   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-538999 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m6.457807031s)
--- PASS: TestJSONOutput/start/Command (66.46s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.91s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-538999 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-538999 --output=json --user=testUser: (7.911145856s)
--- PASS: TestJSONOutput/stop/Command (7.91s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-658759 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-658759 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (72.104418ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"db3ac0dc-f815-4383-b2b2-cb0c6a26d568","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-658759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1da86757-2e11-43f6-b913-5ac880086266","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21918"}}
	{"specversion":"1.0","id":"239e68d7-a33f-41b6-a5fa-6034dc7f3f39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4c194600-a430-43cd-bfe6-fb6887b9a691","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig"}}
	{"specversion":"1.0","id":"d1af7399-4f9d-4a22-b16d-e8755e2f7293","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube"}}
	{"specversion":"1.0","id":"7cab6352-78bc-4f83-a87e-efdd6bfc72d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a2678e22-4739-4322-8b67-6e625c13d806","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"890cf71e-6008-4b0e-8d95-782c62bb29fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-658759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-658759
--- PASS: TestErrorJSONOutput (0.21s)
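Note: each stdout line above is a CloudEvents envelope; the failing event carries the exit code, reason name and message under .data. As an illustrative sketch (jq is not part of the test), the failure reason can be pulled out of the JSON stream like this:

  out/minikube-linux-amd64 start -p json-output-error-658759 --memory=3072 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'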

                                                
                                    
TestKicCustomNetwork/create_custom_network (28.64s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-344599 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-344599 --network=: (26.533104045s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-344599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-344599
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-344599: (2.087335894s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.64s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (22.55s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-365166 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-365166 --network=bridge: (20.586709023s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-365166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-365166
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-365166: (1.946195131s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.55s)

                                                
                                    
TestKicExistingNetwork (23.89s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1119 22:16:42.459473   12829 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1119 22:16:42.475151   12829 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1119 22:16:42.475203   12829 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1119 22:16:42.475217   12829 cli_runner.go:164] Run: docker network inspect existing-network
W1119 22:16:42.490496   12829 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1119 22:16:42.490519   12829 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1119 22:16:42.490532   12829 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1119 22:16:42.490654   12829 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1119 22:16:42.506477   12829 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cde0f356bd10 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ba:b5:fa:ba:e0:a6} reservation:<nil>}
I1119 22:16:42.506862   12829 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a6300}
I1119 22:16:42.506904   12829 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1119 22:16:42.506947   12829 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1119 22:16:42.549748   12829 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-769836 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-769836 --network=existing-network: (21.82667057s)
helpers_test.go:175: Cleaning up "existing-network-769836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-769836
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-769836: (1.942532955s)
I1119 22:17:06.335516   12829 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.89s)
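Note: the run above shows the two halves of the scenario: minikube first creates the missing existing-network bridge itself (the docker network create line, including the 192.168.58.0/24 subnet it picked), and the subsequent start attaches to it via --network. A pre-created network can be supplied the same way; the flags below are copied from the log, with minikube's created_by/name labels omitted:

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 existing-network
  out/minikube-linux-amd64 start -p existing-network-769836 --network=existing-network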

                                                
                                    
TestKicCustomSubnet (22.48s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-887593 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-887593 --subnet=192.168.60.0/24: (20.392271187s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-887593 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-887593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-887593
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-887593: (2.065041206s)
--- PASS: TestKicCustomSubnet (22.48s)

                                                
                                    
TestKicStaticIP (26.92s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-807993 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-807993 --static-ip=192.168.200.200: (24.70931067s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-807993 ip
helpers_test.go:175: Cleaning up "static-ip-807993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-807993
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-807993: (2.072177742s)
--- PASS: TestKicStaticIP (26.92s)
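Note: --static-ip pins the node container to a fixed private IPv4 address instead of letting minikube pick one from its subnet allocator, and the ip subcommand confirms the assignment. The two steps from the run above:

  out/minikube-linux-amd64 start -p static-ip-807993 --static-ip=192.168.200.200
  out/minikube-linux-amd64 -p static-ip-807993 ip   # expected to print 192.168.200.200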

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (46.42s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-206856 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-206856 --driver=docker  --container-runtime=crio: (19.795636219s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-209159 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-209159 --driver=docker  --container-runtime=crio: (20.911427651s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-206856
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-209159
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-209159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-209159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-209159: (2.278330187s)
helpers_test.go:175: Cleaning up "first-206856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-206856
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-206856: (2.276259143s)
--- PASS: TestMinikubeProfile (46.42s)
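Note: the test creates two independent profiles, switches the active one with the profile subcommand, and checks the result through the JSON profile listing. The switch-and-inspect pair from the run above:

  out/minikube-linux-amd64 profile first-206856
  out/minikube-linux-amd64 profile list -ojson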

                                                
                                    
TestMountStart/serial/StartWithMountFirst (4.97s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-208497 --memory=3072 --mount-string /tmp/TestMountStartserial3095338483/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-208497 --memory=3072 --mount-string /tmp/TestMountStartserial3095338483/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.972859265s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.97s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-208497 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
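Note: --mount-string takes <host path>:<node path>, so the start above exposes the test's temporary host directory inside the node at /minikube-host, and the verification simply lists that path over ssh. The two commands from the run, reproduced verbatim but wrapped for readability:

  out/minikube-linux-amd64 start -p mount-start-1-208497 --memory=3072 \
    --mount-string /tmp/TestMountStartserial3095338483/001:/minikube-host \
    --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
    --no-kubernetes --driver=docker --container-runtime=crio
  out/minikube-linux-amd64 -p mount-start-1-208497 ssh -- ls /minikube-host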

                                                
                                    
TestMountStart/serial/StartWithMountSecond (4.67s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-219826 --memory=3072 --mount-string /tmp/TestMountStartserial3095338483/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-219826 --memory=3072 --mount-string /tmp/TestMountStartserial3095338483/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.674453106s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.67s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-219826 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-208497 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-208497 --alsologtostderr -v=5: (1.644750635s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-219826 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-219826
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-219826: (1.233110284s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.14s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-219826
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-219826: (6.136752222s)
--- PASS: TestMountStart/serial/RestartStopped (7.14s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-219826 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (92.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-656622 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1119 22:19:12.911602   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:19:57.328350   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-656622 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m32.309471385s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.77s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-656622 -- rollout status deployment/busybox: (1.758454522s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-7zzw7 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-dkkq4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-7zzw7 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-dkkq4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-7zzw7 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-dkkq4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.09s)
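Note: the manifest deploys busybox pods across both nodes (two pods in this run), and each exec/nslookup pair checks that cluster DNS resolves both external and in-cluster names from every pod. Pod names change per run, so they are listed first:

  out/minikube-linux-amd64 kubectl -p multinode-656622 -- get pods -o jsonpath='{.items[*].metadata.name}'
  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-7zzw7 -- nslookup kubernetes.default.svc.cluster.local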

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-7zzw7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-7zzw7 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-dkkq4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-dkkq4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)
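Note: the first exec extracts the address that host.minikube.internal resolves to inside the pod (192.168.67.1 in this run) and the second pings it, confirming that pods on both nodes can reach the host. Re-quoted so the pair can be pasted into an interactive shell:

  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-7zzw7 -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  out/minikube-linux-amd64 kubectl -p multinode-656622 -- exec busybox-7b57f96db7-7zzw7 -- sh -c "ping -c 1 192.168.67.1"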

                                                
                                    
TestMultiNode/serial/AddNode (53.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-656622 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-656622 -v=5 --alsologtostderr: (52.652580183s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.27s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-656622 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 cp testdata/cp-test.txt multinode-656622:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 cp multinode-656622:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2813850973/001/cp-test_multinode-656622.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 cp multinode-656622:/home/docker/cp-test.txt multinode-656622-m02:/home/docker/cp-test_multinode-656622_multinode-656622-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622-m02 "sudo cat /home/docker/cp-test_multinode-656622_multinode-656622-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 cp multinode-656622:/home/docker/cp-test.txt multinode-656622-m03:/home/docker/cp-test_multinode-656622_multinode-656622-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622-m03 "sudo cat /home/docker/cp-test_multinode-656622_multinode-656622-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 cp testdata/cp-test.txt multinode-656622-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 cp multinode-656622-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2813850973/001/cp-test_multinode-656622-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 cp multinode-656622-m02:/home/docker/cp-test.txt multinode-656622:/home/docker/cp-test_multinode-656622-m02_multinode-656622.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622 "sudo cat /home/docker/cp-test_multinode-656622-m02_multinode-656622.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 cp multinode-656622-m02:/home/docker/cp-test.txt multinode-656622-m03:/home/docker/cp-test_multinode-656622-m02_multinode-656622-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622-m03 "sudo cat /home/docker/cp-test_multinode-656622-m02_multinode-656622-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 cp testdata/cp-test.txt multinode-656622-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 cp multinode-656622-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2813850973/001/cp-test_multinode-656622-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 cp multinode-656622-m03:/home/docker/cp-test.txt multinode-656622:/home/docker/cp-test_multinode-656622-m03_multinode-656622.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622 "sudo cat /home/docker/cp-test_multinode-656622-m03_multinode-656622.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 cp multinode-656622-m03:/home/docker/cp-test.txt multinode-656622-m02:/home/docker/cp-test_multinode-656622-m03_multinode-656622-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622-m02 "sudo cat /home/docker/cp-test_multinode-656622-m03_multinode-656622-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.23s)
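Note: the sequence exercises every cp direction (host to node, node to host, and node to node) and verifies each transfer by cat-ing the destination file over ssh. The basic host-to-node pair from the run above:

  out/minikube-linux-amd64 -p multinode-656622 cp testdata/cp-test.txt multinode-656622:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-656622 ssh -n multinode-656622 "sudo cat /home/docker/cp-test.txt"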

                                                
                                    
TestMultiNode/serial/StopNode (2.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-656622 node stop m03: (1.253858643s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-656622 status: exit status 7 (461.847039ms)

                                                
                                                
-- stdout --
	multinode-656622
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-656622-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-656622-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-656622 status --alsologtostderr: exit status 7 (469.339354ms)

                                                
                                                
-- stdout --
	multinode-656622
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-656622-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-656622-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:21:46.179108  153031 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:21:46.179314  153031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:21:46.179322  153031 out.go:374] Setting ErrFile to fd 2...
	I1119 22:21:46.179326  153031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:21:46.179531  153031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:21:46.179678  153031 out.go:368] Setting JSON to false
	I1119 22:21:46.179706  153031 mustload.go:66] Loading cluster: multinode-656622
	I1119 22:21:46.179790  153031 notify.go:221] Checking for updates...
	I1119 22:21:46.180053  153031 config.go:182] Loaded profile config "multinode-656622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:21:46.180065  153031 status.go:174] checking status of multinode-656622 ...
	I1119 22:21:46.180457  153031 cli_runner.go:164] Run: docker container inspect multinode-656622 --format={{.State.Status}}
	I1119 22:21:46.199726  153031 status.go:371] multinode-656622 host status = "Running" (err=<nil>)
	I1119 22:21:46.199756  153031 host.go:66] Checking if "multinode-656622" exists ...
	I1119 22:21:46.200010  153031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-656622
	I1119 22:21:46.216860  153031 host.go:66] Checking if "multinode-656622" exists ...
	I1119 22:21:46.217122  153031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:21:46.217168  153031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-656622
	I1119 22:21:46.233605  153031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/multinode-656622/id_rsa Username:docker}
	I1119 22:21:46.322449  153031 ssh_runner.go:195] Run: systemctl --version
	I1119 22:21:46.328303  153031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:21:46.339521  153031 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:21:46.397337  153031 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-19 22:21:46.387928331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:21:46.397868  153031 kubeconfig.go:125] found "multinode-656622" server: "https://192.168.67.2:8443"
	I1119 22:21:46.397898  153031 api_server.go:166] Checking apiserver status ...
	I1119 22:21:46.397930  153031 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1119 22:21:46.408942  153031 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1257/cgroup
	W1119 22:21:46.416737  153031 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1257/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1119 22:21:46.416773  153031 ssh_runner.go:195] Run: ls
	I1119 22:21:46.420004  153031 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1119 22:21:46.423760  153031 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1119 22:21:46.423777  153031 status.go:463] multinode-656622 apiserver status = Running (err=<nil>)
	I1119 22:21:46.423786  153031 status.go:176] multinode-656622 status: &{Name:multinode-656622 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:21:46.423800  153031 status.go:174] checking status of multinode-656622-m02 ...
	I1119 22:21:46.424022  153031 cli_runner.go:164] Run: docker container inspect multinode-656622-m02 --format={{.State.Status}}
	I1119 22:21:46.440516  153031 status.go:371] multinode-656622-m02 host status = "Running" (err=<nil>)
	I1119 22:21:46.440533  153031 host.go:66] Checking if "multinode-656622-m02" exists ...
	I1119 22:21:46.440747  153031 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-656622-m02
	I1119 22:21:46.456883  153031 host.go:66] Checking if "multinode-656622-m02" exists ...
	I1119 22:21:46.457094  153031 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1119 22:21:46.457124  153031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-656622-m02
	I1119 22:21:46.472779  153031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21918-9335/.minikube/machines/multinode-656622-m02/id_rsa Username:docker}
	I1119 22:21:46.560397  153031 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1119 22:21:46.572082  153031 status.go:176] multinode-656622-m02 status: &{Name:multinode-656622-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:21:46.572109  153031 status.go:174] checking status of multinode-656622-m03 ...
	I1119 22:21:46.572353  153031 cli_runner.go:164] Run: docker container inspect multinode-656622-m03 --format={{.State.Status}}
	I1119 22:21:46.589548  153031 status.go:371] multinode-656622-m03 host status = "Stopped" (err=<nil>)
	I1119 22:21:46.589565  153031 status.go:384] host is not running, skipping remaining checks
	I1119 22:21:46.589569  153031 status.go:176] multinode-656622-m03 status: &{Name:multinode-656622-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-656622 node start m03 -v=5 --alsologtostderr: (6.584901116s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.24s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (59.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-656622
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-656622
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-656622: (31.232395457s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-656622 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-656622 --wait=true -v=5 --alsologtostderr: (28.12695126s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-656622
--- PASS: TestMultiNode/serial/RestartKeepsNodes (59.48s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-656622 node delete m03: (4.37433883s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.93s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (30.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-656622 stop: (30.549085036s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-656622 status: exit status 7 (92.255299ms)

                                                
                                                
-- stdout --
	multinode-656622
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-656622-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-656622 status --alsologtostderr: exit status 7 (91.065415ms)

                                                
                                                
-- stdout --
	multinode-656622
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-656622-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:23:28.937811  162596 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:23:28.938075  162596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:23:28.938085  162596 out.go:374] Setting ErrFile to fd 2...
	I1119 22:23:28.938091  162596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:23:28.938284  162596 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:23:28.938481  162596 out.go:368] Setting JSON to false
	I1119 22:23:28.938516  162596 mustload.go:66] Loading cluster: multinode-656622
	I1119 22:23:28.938606  162596 notify.go:221] Checking for updates...
	I1119 22:23:28.938929  162596 config.go:182] Loaded profile config "multinode-656622": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:23:28.938946  162596 status.go:174] checking status of multinode-656622 ...
	I1119 22:23:28.939323  162596 cli_runner.go:164] Run: docker container inspect multinode-656622 --format={{.State.Status}}
	I1119 22:23:28.957321  162596 status.go:371] multinode-656622 host status = "Stopped" (err=<nil>)
	I1119 22:23:28.957339  162596 status.go:384] host is not running, skipping remaining checks
	I1119 22:23:28.957346  162596 status.go:176] multinode-656622 status: &{Name:multinode-656622 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1119 22:23:28.957368  162596 status.go:174] checking status of multinode-656622-m02 ...
	I1119 22:23:28.957605  162596 cli_runner.go:164] Run: docker container inspect multinode-656622-m02 --format={{.State.Status}}
	I1119 22:23:28.974468  162596 status.go:371] multinode-656622-m02 host status = "Stopped" (err=<nil>)
	I1119 22:23:28.974488  162596 status.go:384] host is not running, skipping remaining checks
	I1119 22:23:28.974495  162596 status.go:176] multinode-656622-m02 status: &{Name:multinode-656622-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.73s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (46.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-656622 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1119 22:24:12.912208   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-656622 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.289286589s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-656622 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.85s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-656622
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-656622-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-656622-m02 --driver=docker  --container-runtime=crio: exit status 14 (70.874317ms)

                                                
                                                
-- stdout --
	* [multinode-656622-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-656622-m02' is duplicated with machine name 'multinode-656622-m02' in profile 'multinode-656622'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-656622-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-656622-m03 --driver=docker  --container-runtime=crio: (19.966411435s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-656622
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-656622: exit status 80 (267.586114ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-656622 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-656622-m03 already exists in multinode-656622-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-656622-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-656622-m03: (2.250238153s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.61s)

                                                
                                    
TestPreload (103.4s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-837234 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1119 22:24:57.327870   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-837234 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.198284424s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-837234 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-837234
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-837234: (5.788531584s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-837234 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1119 22:26:20.393179   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-837234 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.95173792s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-837234 image list
helpers_test.go:175: Cleaning up "test-preload-837234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-837234
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-837234: (2.361046223s)
--- PASS: TestPreload (103.40s)

                                                
                                    
TestScheduledStopUnix (97s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-849026 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-849026 --memory=3072 --driver=docker  --container-runtime=crio: (21.098103444s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-849026 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 22:26:47.125909  179678 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:26:47.126148  179678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:26:47.126156  179678 out.go:374] Setting ErrFile to fd 2...
	I1119 22:26:47.126160  179678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:26:47.126314  179678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:26:47.126516  179678 out.go:368] Setting JSON to false
	I1119 22:26:47.126628  179678 mustload.go:66] Loading cluster: scheduled-stop-849026
	I1119 22:26:47.126964  179678 config.go:182] Loaded profile config "scheduled-stop-849026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:26:47.127043  179678 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/config.json ...
	I1119 22:26:47.127245  179678 mustload.go:66] Loading cluster: scheduled-stop-849026
	I1119 22:26:47.127344  179678 config.go:182] Loaded profile config "scheduled-stop-849026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-849026 -n scheduled-stop-849026
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 22:26:47.484423  179827 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:26:47.484535  179827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:26:47.484545  179827 out.go:374] Setting ErrFile to fd 2...
	I1119 22:26:47.484549  179827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:26:47.484767  179827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:26:47.485038  179827 out.go:368] Setting JSON to false
	I1119 22:26:47.485219  179827 daemonize_unix.go:73] killing process 179713 as it is an old scheduled stop
	I1119 22:26:47.485332  179827 mustload.go:66] Loading cluster: scheduled-stop-849026
	I1119 22:26:47.485766  179827 config.go:182] Loaded profile config "scheduled-stop-849026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:26:47.485869  179827 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/config.json ...
	I1119 22:26:47.486077  179827 mustload.go:66] Loading cluster: scheduled-stop-849026
	I1119 22:26:47.486218  179827 config.go:182] Loaded profile config "scheduled-stop-849026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1119 22:26:47.491352   12829 retry.go:31] will retry after 99.062µs: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.492532   12829 retry.go:31] will retry after 98.098µs: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.493697   12829 retry.go:31] will retry after 157.8µs: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.494856   12829 retry.go:31] will retry after 455.348µs: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.495999   12829 retry.go:31] will retry after 715.069µs: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.497128   12829 retry.go:31] will retry after 840.071µs: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.498257   12829 retry.go:31] will retry after 730.585µs: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.499386   12829 retry.go:31] will retry after 1.724477ms: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.501604   12829 retry.go:31] will retry after 3.192669ms: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.505800   12829 retry.go:31] will retry after 2.70955ms: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.509009   12829 retry.go:31] will retry after 7.883154ms: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.517200   12829 retry.go:31] will retry after 5.60817ms: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.523385   12829 retry.go:31] will retry after 9.687775ms: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.533631   12829 retry.go:31] will retry after 10.410588ms: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.544845   12829 retry.go:31] will retry after 23.930743ms: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
I1119 22:26:47.569079   12829 retry.go:31] will retry after 39.830886ms: open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-849026 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-849026 -n scheduled-stop-849026
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-849026
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-849026 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1119 22:27:13.317310  180384 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:27:13.317576  180384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:27:13.317586  180384 out.go:374] Setting ErrFile to fd 2...
	I1119 22:27:13.317591  180384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:27:13.317785  180384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:27:13.318047  180384 out.go:368] Setting JSON to false
	I1119 22:27:13.318124  180384 mustload.go:66] Loading cluster: scheduled-stop-849026
	I1119 22:27:13.318424  180384 config.go:182] Loaded profile config "scheduled-stop-849026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:27:13.318792  180384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/scheduled-stop-849026/config.json ...
	I1119 22:27:13.319060  180384 mustload.go:66] Loading cluster: scheduled-stop-849026
	I1119 22:27:13.319224  180384 config.go:182] Loaded profile config "scheduled-stop-849026": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
E1119 22:27:15.976777   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-849026
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-849026: exit status 7 (77.72575ms)

                                                
                                                
-- stdout --
	scheduled-stop-849026
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-849026 -n scheduled-stop-849026
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-849026 -n scheduled-stop-849026: exit status 7 (74.146162ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-849026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-849026
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-849026: (4.496274397s)
--- PASS: TestScheduledStopUnix (97.00s)

                                                
                                    
TestInsufficientStorage (9.27s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-060026 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-060026 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.874468788s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"33012e10-44a4-483c-af08-d54b327cce6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-060026] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d6c6d0a-0437-4a3d-9499-4ba5531de070","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21918"}}
	{"specversion":"1.0","id":"744249f1-37b3-4036-8abe-b2d6382c29ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6367bfa3-a28c-43f6-8287-31070498a163","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig"}}
	{"specversion":"1.0","id":"4820c8de-d4f0-4677-8429-1ad0023e139a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube"}}
	{"specversion":"1.0","id":"6ec2d7f9-7d45-457b-889d-7b4d007f15e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"46df2d05-33b5-4b7f-b575-58f7a4b26c75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"814640a8-8d6b-43e6-8c36-b14d3a566f38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c2d7e09b-5916-4052-9ee7-e2009eea5b34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2f7e3839-5467-484b-b5f8-11d61c3b9335","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c11d7a90-8a28-43f0-b6ef-4734f0361fb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a9fe0236-4554-45b8-b1b3-04c4c42d0d54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-060026\" primary control-plane node in \"insufficient-storage-060026\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"13738c0c-7241-4cee-a658-d0572e5c936a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763561786-21918 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0654bc78-ddfa-4ac6-a933-6b3b14b9c407","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6d7c02fa-dffa-49df-ac0a-9ca16d9b68e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-060026 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-060026 --output=json --layout=cluster: exit status 7 (273.224572ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-060026","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-060026","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 22:28:10.106159  182897 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-060026" does not appear in /home/jenkins/minikube-integration/21918-9335/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-060026 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-060026 --output=json --layout=cluster: exit status 7 (271.750549ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-060026","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-060026","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1119 22:28:10.378957  183009 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-060026" does not appear in /home/jenkins/minikube-integration/21918-9335/kubeconfig
	E1119 22:28:10.388944  183009 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/insufficient-storage-060026/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-060026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-060026
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-060026: (1.845849971s)
--- PASS: TestInsufficientStorage (9.27s)

                                                
                                    
TestRunningBinaryUpgrade (50.03s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2941565213 start -p running-upgrade-083468 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2941565213 start -p running-upgrade-083468 --memory=3072 --vm-driver=docker  --container-runtime=crio: (24.165854852s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-083468 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-083468 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.848633606s)
helpers_test.go:175: Cleaning up "running-upgrade-083468" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-083468
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-083468: (2.467927297s)
--- PASS: TestRunningBinaryUpgrade (50.03s)

                                                
                                    
TestKubernetesUpgrade (305.21s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.336531636s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-801704
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-801704: (2.210462499s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-801704 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-801704 status --format={{.Host}}: exit status 7 (77.673778ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.196006397s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-801704 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (83.252148ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-801704] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-801704
	    minikube start -p kubernetes-upgrade-801704 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8017042 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-801704 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-801704 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.517930379s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-801704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-801704
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-801704: (2.718122239s)
--- PASS: TestKubernetesUpgrade (305.21s)

                                                
                                    
TestMissingContainerUpgrade (73.59s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1882682332 start -p missing-upgrade-015670 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1882682332 start -p missing-upgrade-015670 --memory=3072 --driver=docker  --container-runtime=crio: (20.49703859s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-015670
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-015670: (10.438049286s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-015670
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-015670 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-015670 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.729295136s)
helpers_test.go:175: Cleaning up "missing-upgrade-015670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-015670
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-015670: (2.379239609s)
--- PASS: TestMissingContainerUpgrade (73.59s)

                                                
                                    
TestPause/serial/Start (54.99s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-340203 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-340203 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (54.988534982s)
--- PASS: TestPause/serial/Start (54.99s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (101.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3323739450 start -p stopped-upgrade-459977 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3323739450 start -p stopped-upgrade-459977 --memory=3072 --vm-driver=docker  --container-runtime=crio: (1m19.986643077s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3323739450 -p stopped-upgrade-459977 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3323739450 -p stopped-upgrade-459977 stop: (2.461207556s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-459977 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-459977 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.669281117s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (101.12s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (9.94s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-340203 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1119 22:29:12.911770   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-340203 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (9.929589146s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (9.94s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-459977
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-459977: (1.1300019s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-662839 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-662839 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (99.175298ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-662839] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (23.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-662839 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-662839 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.463950447s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-662839 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (23.80s)

                                                
                                    
TestNetworkPlugins/group/false (3.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-654834 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-654834 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (181.133851ms)

                                                
                                                
-- stdout --
	* [false-654834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21918
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1119 22:30:03.391878  214092 out.go:360] Setting OutFile to fd 1 ...
	I1119 22:30:03.391992  214092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:30:03.391999  214092 out.go:374] Setting ErrFile to fd 2...
	I1119 22:30:03.392003  214092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1119 22:30:03.392224  214092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21918-9335/.minikube/bin
	I1119 22:30:03.392632  214092 out.go:368] Setting JSON to false
	I1119 22:30:03.393780  214092 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4351,"bootTime":1763587052,"procs":299,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1119 22:30:03.393846  214092 start.go:143] virtualization: kvm guest
	I1119 22:30:03.395565  214092 out.go:179] * [false-654834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1119 22:30:03.397438  214092 out.go:179]   - MINIKUBE_LOCATION=21918
	I1119 22:30:03.397477  214092 notify.go:221] Checking for updates...
	I1119 22:30:03.401289  214092 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1119 22:30:03.403346  214092 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21918-9335/kubeconfig
	I1119 22:30:03.405094  214092 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21918-9335/.minikube
	I1119 22:30:03.406328  214092 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1119 22:30:03.407989  214092 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1119 22:30:03.409947  214092 config.go:182] Loaded profile config "NoKubernetes-662839": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:30:03.410074  214092 config.go:182] Loaded profile config "cert-expiration-855818": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1119 22:30:03.410199  214092 config.go:182] Loaded profile config "running-upgrade-083468": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1119 22:30:03.410385  214092 driver.go:422] Setting default libvirt URI to qemu:///system
	I1119 22:30:03.438416  214092 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1119 22:30:03.438499  214092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1119 22:30:03.500302  214092 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-19 22:30:03.489886459 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1119 22:30:03.500407  214092 docker.go:319] overlay module found
	I1119 22:30:03.501756  214092 out.go:179] * Using the docker driver based on user configuration
	I1119 22:30:03.502767  214092 start.go:309] selected driver: docker
	I1119 22:30:03.502781  214092 start.go:930] validating driver "docker" against <nil>
	I1119 22:30:03.502791  214092 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1119 22:30:03.504601  214092 out.go:203] 
	W1119 22:30:03.505793  214092 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1119 22:30:03.506839  214092 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-654834 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-654834

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-654834

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-654834

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-654834

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-654834

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-654834

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-654834

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-654834

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-654834

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-654834

>>> host: /etc/nsswitch.conf:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: /etc/hosts:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: /etc/resolv.conf:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-654834

>>> host: crictl pods:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: crictl containers:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> k8s: describe netcat deployment:
error: context "false-654834" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-654834" does not exist

>>> k8s: netcat logs:
error: context "false-654834" does not exist

>>> k8s: describe coredns deployment:
error: context "false-654834" does not exist

>>> k8s: describe coredns pods:
error: context "false-654834" does not exist

>>> k8s: coredns logs:
error: context "false-654834" does not exist

>>> k8s: describe api server pod(s):
error: context "false-654834" does not exist

>>> k8s: api server logs:
error: context "false-654834" does not exist

>>> host: /etc/cni:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: ip a s:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: ip r s:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: iptables-save:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: iptables table nat:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> k8s: describe kube-proxy daemon set:
error: context "false-654834" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-654834" does not exist

>>> k8s: kube-proxy logs:
error: context "false-654834" does not exist

>>> host: kubelet daemon status:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: kubelet daemon config:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> k8s: kubelet logs:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-855818
contexts:
- context:
    cluster: cert-expiration-855818
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-855818
  name: cert-expiration-855818
current-context: ""
kind: Config
users:
- name: cert-expiration-855818
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/cert-expiration-855818/client.crt
    client-key: /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/cert-expiration-855818/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-654834

>>> host: docker daemon status:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: docker daemon config:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: /etc/docker/daemon.json:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: docker system info:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: cri-docker daemon status:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: cri-docker daemon config:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: cri-dockerd version:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: containerd daemon status:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: containerd daemon config:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: /etc/containerd/config.toml:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: containerd config dump:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: crio daemon status:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: crio daemon config:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: /etc/crio:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

>>> host: crio config:
* Profile "false-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-654834"

----------------------- debugLogs end: false-654834 [took: 3.404455821s] --------------------------------
helpers_test.go:175: Cleaning up "false-654834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-654834
--- PASS: TestNetworkPlugins/group/false (3.77s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (23.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-662839 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-662839 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.174193514s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-662839 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-662839 status -o json: exit status 2 (294.59474ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-662839","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-662839
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-662839: (1.95402968s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (4.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-662839 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-662839 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.166808733s)
--- PASS: TestNoKubernetes/serial/Start (4.17s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21918-9335/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-662839 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-662839 "sudo systemctl is-active --quiet service kubelet": exit status 1 (286.293025ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (4.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.411750437s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-662839
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-662839: (1.256577348s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-662839 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-662839 --driver=docker  --container-runtime=crio: (6.454126982s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-662839 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-662839 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.305346ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (53.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.243357459s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (48.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.180896356s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (48.18s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (7.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-680619 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e61d10ef-eb12-4b20-83e7-48341a04a48a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e61d10ef-eb12-4b20-83e7-48341a04a48a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003539051s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-680619 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-680619 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-680619 --alsologtostderr -v=3: (16.069807205s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (7.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-178067 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6825c6dc-8105-48a5-9e63-ebb599a140e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6825c6dc-8105-48a5-9e63-ebb599a140e5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004908403s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-178067 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680619 -n old-k8s-version-680619
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680619 -n old-k8s-version-680619: exit status 7 (78.650125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-680619 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (44.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-680619 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (44.343333376s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-680619 -n old-k8s-version-680619
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.68s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-178067 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-178067 --alsologtostderr -v=3: (16.279068501s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-178067 -n no-preload-178067
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-178067 -n no-preload-178067: exit status 7 (80.63141ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-178067 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (52.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-178067 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.222022089s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-178067 -n no-preload-178067
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.56s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (38.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (38.44806035s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (38.45s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gv4nv" [742e4e38-0bcd-405e-8b42-aa37e875d6b6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003129266s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-gv4nv" [742e4e38-0bcd-405e-8b42-aa37e875d6b6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002695741s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-680619 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-680619 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m11.929897882s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.93s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c59j5" [f8269a0a-de0d-47c5-9c97-10c1e631b6eb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002766137s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-c59j5" [f8269a0a-de0d-47c5-9c97-10c1e631b6eb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002941483s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-178067 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-443380 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6ec43358-1e3e-4de9-acb0-6df760321c64] Pending
helpers_test.go:352: "busybox" [6ec43358-1e3e-4de9-acb0-6df760321c64] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6ec43358-1e3e-4de9-acb0-6df760321c64] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004287343s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-443380 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-178067 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (17.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-443380 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-443380 --alsologtostderr -v=3: (17.352640268s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (17.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (26.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (26.937018307s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.94s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-443380 -n embed-certs-443380
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-443380 -n embed-certs-443380: exit status 7 (85.466122ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-443380 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (51.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1119 22:34:12.911841   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/addons-418049/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-443380 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.614868193s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-443380 -n embed-certs-443380
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.95s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (12.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-949690 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-949690 --alsologtostderr -v=3: (12.838765573s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.84s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-949690 -n newest-cni-949690
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-949690 -n newest-cni-949690: exit status 7 (80.681101ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-949690 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (10.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-949690 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.409013672s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-949690 -n newest-cni-949690
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.75s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-409987 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [39ac96a7-8375-46cb-869f-436b0889fd78] Pending
helpers_test.go:352: "busybox" [39ac96a7-8375-46cb-869f-436b0889fd78] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [39ac96a7-8375-46cb-869f-436b0889fd78] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004058081s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-409987 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-949690 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-409987 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-409987 --alsologtostderr -v=3: (16.274035385s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (38.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1119 22:34:57.328005   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/functional-037096/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (38.821377212s)
--- PASS: TestNetworkPlugins/group/auto/Start (38.82s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mmf4r" [5d678ef9-cff7-48f6-b954-b87ef278aff0] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003190742s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987: exit status 7 (87.934533ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-409987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
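
The exit status 7 from status is expected here ("may be ok"): the profile was stopped in the previous step, so the host state prints Stopped, yet addons can still be enabled against the stopped profile so they come up on the subsequent SecondStart. Re-run by hand, the two steps are:

out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987
out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-409987 --images=MetricsScraper=registry.k8s.io/echoserver:1.4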

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.41s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-409987 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.05761835s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-409987 -n default-k8s-diff-port-409987
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (44.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mmf4r" [5d678ef9-cff7-48f6-b954-b87ef278aff0] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003435908s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-443380 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-443380 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (43.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (43.257116574s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (43.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-654834 "pgrep -a kubelet"
I1119 22:35:31.044235   12829 config.go:182] Loaded profile config "auto-654834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-654834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wqhqx" [95d1add9-d7a7-43e2-8cdc-a80fad587d74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wqhqx" [95d1add9-d7a7-43e2-8cdc-a80fad587d74] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004216864s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.21s)
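
The NetCatPod step replaces the netcat deployment from the repo's testdata manifest and polls for a Ready pod with label app=netcat. A rough manual equivalent (kubectl wait used here as a stand-in for the test's polling):

kubectl --context auto-654834 replace --force -f testdata/netcat-deployment.yaml
kubectl --context auto-654834 wait --for=condition=Ready pod -l app=netcat --timeout=15m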

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (50.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.897043163s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-654834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
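
Taken together, the DNS, Localhost and HairPin checks above are three exec probes against the same netcat deployment; the exact commands from this run, runnable by hand against the auto-654834 context:

# DNS: resolve the cluster's kubernetes service from inside the pod
kubectl --context auto-654834 exec deployment/netcat -- nslookup kubernetes.default
# Localhost: the pod can reach its own listener on port 8080
kubectl --context auto-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
# HairPin: the pod can reach itself back through its own service name
kubectl --context auto-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"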

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dcs8c" [731791a6-3fa2-4329-9563-847063f17875] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003789871s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dcs8c" [731791a6-3fa2-4329-9563-847063f17875] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003169526s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-409987 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)
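
Both dashboard checks above reduce to a label query in the kubernetes-dashboard namespace plus a describe of the metrics scraper; a manual spot-check against the same selectors:

kubectl --context default-k8s-diff-port-409987 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
kubectl --context default-k8s-diff-port-409987 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard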

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-409987 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
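
The image verification step lists the images loaded in the profile and flags anything outside the expected Kubernetes set (here busybox and kindnetd). To eyeball the same data, the JSON output can be flattened to tags; note the repoTags field name is an assumption about the JSON shape and may differ across minikube versions:

out/minikube-linux-amd64 -p default-k8s-diff-port-409987 image list --format=json | jq -r '.[].repoTags[]?'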

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (51.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.93120386s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.93s)
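
Worth noting: --cni accepts either a built-in plugin name or a path to a CNI manifest, and this run exercises both forms (the built-in flannel profile starts later with --cni=flannel). Trimmed to the relevant flags:

out/minikube-linux-amd64 start -p custom-flannel-654834 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio
out/minikube-linux-amd64 start -p flannel-654834 --memory=3072 --cni=flannel --driver=docker --container-runtime=crio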

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-4q5sg" [5f6b15ae-41b4-4c6f-8c51-9cda0916998d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003765378s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
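
The ControllerPod step only waits for the kindnet DaemonSet pod to be Running; the same label selector can be queried directly for a spot-check:

kubectl --context kindnet-654834 get pods -n kube-system -l app=kindnet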

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-654834 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-654834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5nvnd" [4b659b5e-5779-4852-ae25-2acdcea5aa87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5nvnd" [4b659b5e-5779-4852-ae25-2acdcea5aa87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.002999272s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (64.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m4.772300043s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-654834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-dc7bc" [3f00a0a6-c331-4020-a692-1285700c1fa0] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-dc7bc" [3f00a0a6-c331-4020-a692-1285700c1fa0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004843124s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-654834 "pgrep -a kubelet"
I1119 22:36:31.359650   12829 config.go:182] Loaded profile config "calico-654834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-654834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rqw2h" [8aef21df-f813-4be8-bbbd-75b2e54e2796] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rqw2h" [8aef21df-f813-4be8-bbbd-75b2e54e2796] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.003619007s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (48.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (48.511961556s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.51s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-654834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-654834 "pgrep -a kubelet"
I1119 22:36:52.224611   12829 config.go:182] Loaded profile config "custom-flannel-654834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-654834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4q956" [052a765b-c4aa-47ae-bc3c-41aaf46d7a79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1119 22:36:53.726735   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/old-k8s-version-680619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:36:53.733114   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/old-k8s-version-680619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:36:53.744474   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/old-k8s-version-680619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:36:53.765802   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/old-k8s-version-680619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:36:53.807241   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/old-k8s-version-680619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:36:53.889876   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/old-k8s-version-680619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:36:54.051884   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/old-k8s-version-680619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:36:54.373547   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/old-k8s-version-680619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:36:55.015613   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/old-k8s-version-680619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-4q956" [052a765b-c4aa-47ae-bc3c-41aaf46d7a79] Running
E1119 22:36:56.297632   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/old-k8s-version-680619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003395175s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (62.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-654834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m2.865134167s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.87s)
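
The bridge profile here uses --cni=bridge, while the enable-default-cni profile above reaches a similar bridge-style setup via the legacy --enable-default-cni=true flag. If in doubt about which CNI config actually landed on the node, it can be inspected over ssh (assuming the usual /etc/cni/net.d location inside the minikube node):

out/minikube-linux-amd64 ssh -p bridge-654834 "ls /etc/cni/net.d"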

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-654834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-654834 "pgrep -a kubelet"
I1119 22:37:15.093778   12829 config.go:182] Loaded profile config "enable-default-cni-654834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-654834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rnw7s" [5b5d2d26-46ae-419a-b2a5-14a61fe3d30c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1119 22:37:15.460408   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1119 22:37:18.022564   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rnw7s" [5b5d2d26-46ae-419a-b2a5-14a61fe3d30c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003372273s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-654834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-qd8z8" [a6d59680-ca5f-48c4-9372-1e837059e3c1] Running
E1119 22:37:33.387193   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/no-preload-178067/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003920258s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-654834 "pgrep -a kubelet"
I1119 22:37:34.634805   12829 config.go:182] Loaded profile config "flannel-654834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-654834 replace --force -f testdata/netcat-deployment.yaml
E1119 22:37:34.707406   12829 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/old-k8s-version-680619/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2hbb2" [a71706b1-6585-45ee-9096-cdd77d0acadf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2hbb2" [a71706b1-6585-45ee-9096-cdd77d0acadf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003202393s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-654834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-654834 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-654834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hdt2z" [7768305c-ef32-4148-ae55-cb36fd52f50a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hdt2z" [7768305c-ef32-4148-ae55-cb36fd52f50a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.002655955s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-654834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-654834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                    

Test skip (27/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-726490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-726490
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-654834 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-654834

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-654834

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-654834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-654834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-654834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-654834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-654834

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-654834

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-654834

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-654834

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-654834

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-654834" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-654834" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-855818
contexts:
- context:
    cluster: cert-expiration-855818
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-855818
  name: cert-expiration-855818
current-context: ""
kind: Config
users:
- name: cert-expiration-855818
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/cert-expiration-855818/client.crt
    client-key: /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/cert-expiration-855818/client.key
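The merged kubeconfig above only knows the cert-expiration-855818 profile and has an empty current-context, which matches the failures throughout this dump. A hedged sketch of inspecting the same state by hand:

	kubectl config view --flatten
	kubectl config current-context   # errors while current-context is unset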

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-654834

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-654834"

                                                
                                                
----------------------- debugLogs end: kubenet-654834 [took: 3.629575287s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-654834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-654834
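The helper deletes the throwaway profile with the minikube binary built for this run; the equivalent manual cleanup (a sketch, assuming a stock minikube on PATH) is:

	minikube profile list
	minikube delete -p kubenet-654834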
--- SKIP: TestNetworkPlugins/group/kubenet (3.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
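This variant is skipped by policy (it interferes with other tests and is outdated) rather than by a runtime constraint; if it were exercised, the CNI would come up through minikube's cilium integration. A sketch of that invocation (assumed flags, not taken from this log):

	minikube start -p cilium-654834 --driver=docker --container-runtime=crio --cni=cilium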
panic.go:636: 
----------------------- debugLogs start: cilium-654834 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-654834" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-855818
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21918-9335/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:30:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: running-upgrade-083468
contexts:
- context:
    cluster: cert-expiration-855818
    extensions:
    - extension:
        last-update: Wed, 19 Nov 2025 22:29:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-855818
  name: cert-expiration-855818
- context:
    cluster: running-upgrade-083468
    user: running-upgrade-083468
  name: running-upgrade-083468
current-context: running-upgrade-083468
kind: Config
users:
- name: cert-expiration-855818
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/cert-expiration-855818/client.crt
    client-key: /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/cert-expiration-855818/client.key
- name: running-upgrade-083468
  user:
    client-certificate: /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/running-upgrade-083468/client.crt
    client-key: /home/jenkins/minikube-integration/21918-9335/.minikube/profiles/running-upgrade-083468/client.key
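Unlike the kubenet dump earlier, this kubeconfig does have a current-context (running-upgrade-083468), yet every probe above still fails because the debug logger pins each command to the nonexistent cilium-654834 context instead of the current one. A sketch of the distinction:

	kubectl config current-context                  # running-upgrade-083468
	kubectl --context cilium-654834 get nodes       # fails: context does not exist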

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-654834

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-654834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-654834"

                                                
                                                
----------------------- debugLogs end: cilium-654834 [took: 3.699569358s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-654834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-654834
--- SKIP: TestNetworkPlugins/group/cilium (3.88s)

                                                
                                    